On convergence of kernel learning estimators

dc.contributor.author: Norkin, Vladimir I.
dc.contributor.author: Keyzer, Michiel A.
dc.date.accessioned: 2025-07-03T21:18:13Z
dc.date.issued: 2008
dc.department: Balıkesir Üniversitesi
dc.description: 20th International Conference EURO Mini Conference: Continuous Optimization and Knowledge-Based Technologies, EurOPT 2008 -- 20 May 2008 through 23 May 2008 -- Neringa
dc.description.abstract: The paper studies kernel regression learning from the standpoint of stochastic optimization and ill-posedness. Specifically, the convergence properties of kernel learning estimators are investigated under a gradual elimination of the regularization parameter as the number of observations rises. We derive computable non-asymptotic bounds on the deviation of the expected risk from its best possible value and obtain an optimal value of the regularization parameter that minimizes these bounds. We establish conditions for almost sure convergence of the function estimates, together with a rule for downward adjustment of the regularization factor as the sample size increases. © Institute of Mathematics and Informatics, 2008.
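For orientation only (this sketch is not part of the record or the paper), the setting the abstract describes is commonly formalized as regularized empirical risk minimization over a reproducing kernel Hilbert space $\mathcal{H}$, with the regularization parameter $\lambda_N$ driven to zero as the sample size $N$ grows; the power-law decay $\lambda_N = c\,N^{-\alpha}$ below is an assumed illustration, not the paper's adjustment rule:

\[
\hat{f}_N = \arg\min_{f \in \mathcal{H}} \; \frac{1}{N} \sum_{i=1}^{N} L\bigl(y_i, f(x_i)\bigr) + \lambda_N \,\|f\|_{\mathcal{H}}^{2},
\qquad \lambda_N = c\,N^{-\alpha}, \quad c > 0, \ \alpha \in (0,1).
\]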
dc.identifier.endpage: 310
dc.identifier.isbn: 978-995528283-9
dc.identifier.scopus: 2-s2.0-67651226007
dc.identifier.scopusquality: N/A
dc.identifier.startpage: 306
dc.identifier.uri: https://hdl.handle.net/20.500.12462/21241
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Vilnius Gediminas Technical University
dc.relation.ispartof: 20th International Conference EURO Mini Conference "Continuous Optimization and Knowledge-Based Technologies", EurOPT 2008
dc.relation.publicationcategory: Conference Item - International - Institutional Academic Staff Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: KA_Scopus_20250703
dc.subject: Consistency
dc.subject: Ill-posedness regularization
dc.subject: Kernel learning
dc.subject: Quantile regression
dc.subject: Stochastic optimization
dc.title: On convergence of kernel learning estimators
dc.type: Conference Object

Files