On convergence of kernel learning estimators

Publisher

Vilnius Gediminas Technical University

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

The paper studies kernel regression learning from the standpoints of stochastic optimization and ill-posedness. Specifically, the convergence properties of kernel learning estimators are investigated as the regularization parameter is gradually driven to zero with a growing number of observations. We derive computable non-asymptotic bounds on the deviation of the expected risk from its best possible value and obtain an optimal value of the regularization parameter that minimizes these bounds. We also establish conditions for almost sure convergence of the function estimates, together with a rule for decreasing the regularization factor as the sample size increases. © Institute of Mathematics and Informatics, 2008.
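
The abstract's key mechanism is a regularized kernel estimator whose regularization parameter shrinks as the sample size grows. Below is a minimal sketch of that idea using kernel ridge regression with a Gaussian kernel and an illustrative decay schedule lambda_n = n^{-1/2}; the paper's actual estimator (which also covers quantile regression), decay rule, and risk bounds are not reproduced on this page, so the kernel choice and the schedule here are assumptions made purely for illustration.

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth=0.2):
    """Gaussian (RBF) kernel matrix between the rows of X and Z (assumed kernel choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_ridge_fit(X, y, lam):
    """Regularized kernel estimator: solve (K + n*lam*I) alpha = y."""
    n = len(y)
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, alpha, X_new):
    return gaussian_kernel(X_new, X_train) @ alpha

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x[:, 0])  # true regression function

for n in (50, 200, 800):
    X = rng.uniform(0, 1, size=(n, 1))
    y = f(X) + 0.3 * rng.standard_normal(n)
    lam = n ** (-0.5)  # illustrative decay schedule, not the paper's rule
    alpha = kernel_ridge_fit(X, y, lam)
    X_test = rng.uniform(0, 1, size=(500, 1))
    mse = np.mean((predict(X, alpha, X_test) - f(X_test)) ** 2)
    print(f"n={n:4d}  lambda={lam:.4f}  test MSE={mse:.4f}")
```

Under such a schedule the regularization vanishes slowly enough that the estimates stabilize while the bias it introduces disappears, which is the trade-off the paper's bounds and almost-sure convergence conditions make precise.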

Description

20th International Conference EURO Mini Conference "Continuous Optimization and Knowledge-Based Technologies" (EurOPT 2008), 20-23 May 2008, Neringa

Keywords

Consistency, Ill-posedness regularization, Kernel learning, Quantile regression, Stochastic optimization

Source

20th International Conference EURO Mini Conference "Continuous Optimization and Knowledge-Based Technologies", EurOPT 2008
