Non-Sparse Regularization for Multiple Kernel Learning
Authors
Abstract
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this ℓ1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary norms. We provide new insights into the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, such as ℓp-norms with p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster than the commonly used wrapper approaches. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse, and ℓ∞-norm MKL in various scenarios. Empirical applications of ℓp-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies beyond the state of the art.
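The paper's interleaved strategies operate inside the SVM solver; as a rough illustration of the underlying alternation, the sketch below implements the simpler wrapper-style variant instead, assuming the closed-form ℓp weight update β_m ∝ ||w_m||^(2/(p+1)) with ||β||_p = 1 and using scikit-learn's SVC with a precomputed kernel. The function name `lp_mkl` and these implementation choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=2.0, C=1.0, n_iter=20, eps=1e-8):
    """Wrapper-style alternating optimization for lp-norm MKL (sketch).

    kernels : list of (n, n) precomputed kernel matrices
    y       : (n,) binary labels
    p       : norm parameter; p > 1 yields non-sparse kernel weights
    """
    M = len(kernels)
    beta = np.full(M, M ** (-1.0 / p))            # uniform start, ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        # signed dual coefficients alpha_i * y_i on the support vectors
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()
        # squared RKHS norms ||w_m||^2 = beta_m^2 * a^T K_m a
        wnorm2 = np.array([max(b ** 2 * a @ Km @ a, eps)
                           for b, Km in zip(beta, kernels)])
        # closed-form lp update: beta_m prop. to ||w_m||^(2/(p+1)), then renormalize
        beta = wnorm2 ** (1.0 / (p + 1))
        beta /= np.sum(beta ** p) ** (1.0 / p)
    # refit once with the final kernel weights
    K = sum(b * Km for b, Km in zip(beta, kernels))
    svm = SVC(C=C, kernel="precomputed").fit(K, y)
    return beta, svm
```

At prediction time, the test-versus-training kernel matrices have to be mixed with the same β before calling the fitted model's decision function.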
Similar resources
Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning
In this paper, we give a new generalization error bound of Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target in this paper is dense-type regularizations, including ℓp-MKL, which imposes ℓp-mixed-norm regularization instead of ℓ1-mixed-norm regularization. According to recent numerical experiments, the sparse regularization does not necessarily show a good p...
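For concreteness, the ℓp-mixed-norm penalty referred to here is commonly written as follows; the exact notation is an assumption about the paper, stated in the standard form:

```latex
% lp-mixed-norm regularizer over the RKHS components f_m
\Omega_p(f) \;=\; \Big( \sum_{m=1}^{M} \| f_m \|_{\mathcal{H}_m}^{\,p} \Big)^{1/p},
\qquad f = \sum_{m=1}^{M} f_m ,
```

where p = 1 induces sparse and p > 1 increasingly dense kernel combinations.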
Non-Sparse Regularization and Efficient Training with Multiple Kernels
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this ℓ1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtur...
متن کاملMultiple Kernel Learning for Object Classification
Combining information from various image descriptors has become a standard technique for image classification tasks. Multiple kernel learning (MKL) approaches make it possible to determine the optimal combination of such similarity matrices and the optimal classifier simultaneously. Most MKL approaches employ an ℓ1-regularization on the mixing coefficients to promote sparse solutions; an assumption that is...
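When heterogeneous descriptor kernels are combined, each kernel matrix is usually rescaled first so that the learned mixing coefficients are comparable across descriptors. A minimal sketch, assuming trace normalization as the rescaling convention (a common choice, not stated in this abstract):

```python
import numpy as np

def combine_kernels(kernel_matrices, weights):
    """Mix per-descriptor kernel matrices into a single kernel.

    Each kernel is trace-normalized so that no descriptor dominates
    purely because of its scale.
    """
    combined = np.zeros_like(kernel_matrices[0], dtype=float)
    for K, w in zip(kernel_matrices, weights):
        combined += w * (K / np.trace(K))
    return combined
```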
متن کاملFast Learning Rate of Multiple Kernel Learning: Trade-Off between Sparsity and Smoothness
We investigate the learning rate of multiple kernel learning (MKL) with ℓ1 and elastic-net regularizations. The elastic-net regularization is a composition of an ℓ1-regularizer for inducing sparsity and an ℓ2-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relative...
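Written out in the standard form, the elastic-net penalty described here combines the two terms on the RKHS norms; the exact weighting and parametrization used in the paper are an assumption of this sketch:

```latex
% elastic-net regularizer on the RKHS components f_m
\Omega(f) \;=\; \lambda_1 \sum_{m=1}^{M} \| f_m \|_{\mathcal{H}_m}
\;+\; \lambda_2 \sum_{m=1}^{M} \| f_m \|_{\mathcal{H}_m}^{2},
```

where the ℓ1 term induces sparsity across kernels and the ℓ2 term controls smoothness.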
Variable Sparsity Kernel Learning
This paper presents novel algorithms and applications for a particular class of mixed-norm-regularization-based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped and employ ℓ1-norm regularization for promoting sparsity within the RKHS norms of each group and ℓq-norm, q ≥ 2, regularization for promoting non-sparse combinations across groups. Vario...
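Read literally, the group-wise penalty described in this snippet applies an ℓ1 norm inside each group and an ℓq norm across groups; one way to write such a mixed norm (an assumption based on this snippet, not the paper's exact formulation):

```latex
% groups G_1, ..., G_G partition the kernel index set
\Omega(f) \;=\; \Bigg( \sum_{j=1}^{G} \Big( \sum_{m \in \mathcal{G}_j} \| f_m \|_{\mathcal{H}_m} \Big)^{q} \Bigg)^{1/q},
\qquad q \ge 2 .
```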