Error Bounds for ℓp-Norm Multiple Kernel Learning with Least Square Loss
Authors
Abstract
Similar Resources
Improved Loss Bounds For Multiple Kernel Learning
We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher c...
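For context, bounds of this kind refine the generic Rademacher-complexity generalization bound; a standard statement of that generic form (not this paper's exact result) is:

```latex
% Generic Rademacher-complexity bound (standard form, not the paper's
% exact result): for a hypothesis class H with loss values in [0,1],
% with probability at least 1 - \delta over an i.i.d. sample of size n,
\[
  R(f) \;\le\; \widehat{R}_n(f) \;+\; 2\,\mathfrak{R}_n(\ell \circ H)
  \;+\; 3\sqrt{\tfrac{\ln(2/\delta)}{2n}}
  \quad \text{for all } f \in H.
\]
% MKL-specific analyses then bound \mathfrak{R}_n(\ell \circ H) for the
% class of constrained (e.g. 1-norm, sparse) kernel combinations.
```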
Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
A typical approach in estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of discarding the hypothesis error from the sum automatically. Following this direction, we ...
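The decomposition referred to here, in generic form (a sketch with assumed notation: f_z the regularized empirical estimator, f_λ the regularizing function, f_ρ the regression function, Ω the regularizer):

```latex
% Standard error decomposition for a regularized learning scheme
% (generic sketch, notation assumed as above):
\[
\mathcal{E}(f_z)-\mathcal{E}(f_\rho)\;\le\;
\underbrace{\bigl[\mathcal{E}(f_z)-\mathcal{E}_z(f_z)\bigr]
          +\bigl[\mathcal{E}_z(f_\lambda)-\mathcal{E}(f_\lambda)\bigr]}_{\text{sampling error}}
\;+\;\underbrace{\bigl[\mathcal{E}_z(f_z)+\lambda\Omega(f_z)
          -\mathcal{E}_z(f_\lambda)-\lambda\Omega(f_\lambda)\bigr]}_{\text{hypothesis error}}
\;+\;\underbrace{\bigl[\mathcal{E}(f_\lambda)-\mathcal{E}(f_\rho)
          +\lambda\Omega(f_\lambda)\bigr]}_{\text{regularization error}}
\]
```

If the hypothesis space contains the exact regularized minimizer, as under a linear representer theorem, the middle (hypothesis error) term is nonpositive and drops out, which is the advantage noted in the abstract above.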
Error analysis of regularized least-square regression with Fredholm kernel
Learning with the Fredholm kernel has attracted increasing attention recently, since it can effectively exploit the information in the data to improve prediction performance. Despite rapid progress on theoretical and experimental evaluations, its generalization analysis has not been explored in the learning theory literature. In this paper, we establish the generalization bound of least square regularized ...
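A minimal sketch of the least square regularized scheme with a Fredholm-style plug-in kernel; the fredholm_gram construction below (an outer kernel composed with an inner kernel through an auxiliary sample U) is an assumed form for illustration, not necessarily the paper's definition:

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    """Gaussian kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fredholm_gram(X, Z, U, gamma_out=1.0, gamma_in=1.0):
    """Hypothetical Fredholm-style kernel: an outer kernel composed with
    an inner kernel through an auxiliary sample U (assumed construction
    for illustration): K_F = K_XU K_UU K_UZ / |U|^2."""
    return (gaussian_gram(X, U, gamma_out)
            @ gaussian_gram(U, U, gamma_in)
            @ gaussian_gram(U, Z, gamma_out)) / len(U) ** 2

def krr_fit(K, y, lam):
    """Least square regularized scheme: alpha = (K + lam*n*I)^{-1} y."""
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

# toy usage: regression with the plug-in Fredholm-style kernel
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
U = rng.uniform(-1, 1, (200, 1))            # auxiliary (e.g. unlabeled) sample
alpha = krr_fit(fredholm_gram(X, X, U), y, lam=1e-2)
X_new = np.linspace(-1, 1, 5).reshape(-1, 1)
y_new = fredholm_gram(X_new, X, U) @ alpha  # predictions at new points
```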
ℓp-Norm Multiple Kernel Learning
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this ℓ1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtur...
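A minimal sketch of the alternating optimization behind ℓp-norm MKL, here with kernel ridge regression as the base learner; the closed-form weight update uses the standard ℓp-normalized form of the block norms, stated as an assumption rather than as this paper's exact algorithm:

```python
import numpy as np

def lp_mkl_krr(kernels, y, p=2.0, lam=1e-2, iters=20):
    """Sketch: alternating lp-norm MKL with kernel ridge regression as
    the base learner. kernels is a list of (n, n) Gram matrices."""
    M, n = len(kernels), len(y)
    theta = np.full(M, M ** (-1.0 / p))  # uniform start, ||theta||_p = 1
    alpha = np.zeros(n)
    for _ in range(iters):
        # step 1: fit KRR on the weighted kernel combination
        K = sum(t * Km for t, Km in zip(theta, kernels))
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        # step 2: block norms ||w_m||^2 = theta_m^2 * alpha' K_m alpha
        w2 = np.array([t ** 2 * (alpha @ Km @ alpha)
                       for t, Km in zip(theta, kernels)])
        # step 3: closed-form lp-normalized weight update (assumed form),
        # theta_m ∝ ||w_m||^{2/(p+1)}, rescaled so that ||theta||_p = 1
        theta = (w2 ** (1.0 / (p + 1))
                 / (np.sum(w2 ** (p / (p + 1))) ** (1.0 / p) + 1e-12))
    return theta, alpha

# toy usage with a linear and a Gaussian kernel (illustrative only)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X[:, 0] + 0.1 * rng.standard_normal(50)
K_lin = X @ X.T
K_rbf = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
theta, alpha = lp_mkl_krr([K_lin, K_rbf], y, p=1.5)
```

Larger p drives the learned weights toward a uniform mixture, while p close to 1 recovers the sparse behavior criticized in the abstract above.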
Journal
Journal title: Abstract and Applied Analysis
Year: 2012
ISSN: 1085-3375, 1687-0409
DOI: 10.1155/2012/915920