A Note on Improved Loss Bounds for Multiple Kernel Learning
Authors
Abstract
The paper [5] presented a bound on the generalisation error of classifiers learned through multiple kernel learning. The bound has an improved additive dependence on the number of kernels, while retaining the same logarithmic dependence on that number. However, parts of the proof were presented incorrectly in that paper. This note remedies that weakness by restating the problem and giving a detailed proof of the Rademacher complexity bound from [5].
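The note concerns a Rademacher complexity bound for multiple kernel learning. As a minimal illustration (not the paper's actual bound or proof), the empirical Rademacher complexity of a union of RKHS balls can be estimated by Monte Carlo: for a fixed sign vector σ, the supremum of (1/n)·Σᵢ σᵢ f(xᵢ) over a single ball of radius B with kernel matrix K is (B/n)·√(σᵀKσ), and the supremum over the union of the M candidate kernels' balls is the maximum over kernels. The helper below is a hypothetical NumPy sketch under these standard assumptions.

```python
import numpy as np

def empirical_rademacher_mkl(kernels, B=1.0, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity of the
    class of functions with RKHS norm at most B in at least one of the M
    candidate kernels.  For each random sign vector sigma, the supremum over
    a single RKHS ball is (B/n) * sqrt(sigma^T K sigma); the union of the
    M balls takes the max over the kernel matrices."""
    rng = np.random.default_rng(seed)
    n = kernels[0].shape[0]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        # clip tiny negative values caused by floating-point round-off
        total += max(np.sqrt(max(sigma @ K @ sigma, 0.0)) for K in kernels) * B / n
    return total / n_draws

# toy example (illustrative data): two Gaussian kernels on random 1-D points
X = np.random.default_rng(1).normal(size=(50, 1))
sq = (X - X.T) ** 2
kernels = [np.exp(-sq / (2 * s ** 2)) for s in (0.5, 2.0)]
print(empirical_rademacher_mkl(kernels))
```

For a single kernel this reduces to the familiar (B/n)·E_σ[√(σᵀKσ)] quantity; the max over kernels is what introduces the dependence on the number of kernels that such bounds control.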
Similar papers
Improved Loss Bounds For Multiple Kernel Learning
We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher c...
Neural Network-Based Learning Kernel for Automatic Segmentation of Multiple Sclerosis Lesions on Magnetic Resonance Images
Background: Multiple Sclerosis (MS) is a degenerative disease of the central nervous system. MS patients have dead tissue in their brains called MS lesions. MRI is an imaging technique sensitive to soft tissue such as the brain, showing MS lesions as hyper-intense or hypo-intense signals. Since manual segmentation of these lesions is a laborious and time-consuming task, automatic segmentation ...
Generalization Guarantees for a Binary Classification Framework for Two-Stage Multiple Kernel Learning
We present generalization bounds for the TS-MKL framework for two-stage multiple kernel learning. We also present bounds for sparse kernel learning formulations within the TS-MKL framework.
Multitask Multiple Kernel Learning
We present a general regularization-based framework for multi-task learning (MTL), in which the similarity between tasks can be learned or refined using ℓp-norm multiple kernel learning (MKL). Based on this very general formulation (including a general loss function), we derive the corresponding dual formulation using Fenchel duality applied to Hermitian matrices. We show that numerous establish...
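The ℓp-norm kernel weighting mentioned in this abstract can be sketched concretely: base kernel matrices are mixed with nonnegative weights rescaled to unit ℓp-norm. The `combine_kernels` helper and its parameterisation below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def combine_kernels(kernels, beta, p=2.0):
    """Form the combined kernel K = sum_m beta_m K_m used in p-norm MKL:
    nonnegative weights are rescaled so that ||beta||_p = 1 before mixing
    the base kernel matrices (an assumed, standard parameterisation)."""
    beta = np.maximum(np.asarray(beta, dtype=float), 0.0)
    beta /= np.linalg.norm(beta, ord=p)
    return sum(b * K for b, K in zip(beta, kernels))

# toy usage: two identical base kernels with equal weights under p = 2
K = combine_kernels([np.eye(2), np.eye(2)], [1.0, 1.0], p=2.0)
print(K)
```

With p = 1 the constraint promotes sparse kernel selection; larger p spreads weight more uniformly across the base kernels.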
Journal:
- CoRR
Volume: abs/1106.6258
Pages: -
Publication date: 2011