Kernel Learning by Unconstrained Optimization
Abstract
We study the problem of learning a kernel matrix from an a priori kernel and training data. An unconstrained convex optimization formulation is proposed, with an arbitrary convex smooth loss function on kernel entries and a LogDet divergence for regularization. Since the number of variables is of order O(n²) for an n×n kernel matrix, standard Newton and quasi-Newton methods are too time-consuming. An operator-form Hessian is used to develop an O(n²) trust-region inexact Newton method, in which the Newton direction is computed by several conjugate gradient steps on the Hessian operator equation. On the uspst dataset, our algorithm handles 2 million optimization variables within one hour. Experiments are shown for both linear (Mahalanobis) metric learning and kernel learning, and the convergence rate, speed, and performance of several loss functions and algorithms are discussed.
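The ingredients the abstract names can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes a squared loss on a set of supervised kernel entries, uses the LogDet divergence D_ld(K, K0) = tr(K K0⁻¹) − log det(K K0⁻¹) − n for regularization, and obtains a Newton direction by conjugate gradients on Hessian-operator products H[V] = mask∘V + λ K⁻¹ V K⁻¹, so the n² × n² Hessian is never formed. The trust region is simplified here to a backtracking step that keeps K positive definite; the mask, target entries, and λ are illustrative assumptions.

```python
import numpy as np

def is_pd(K):
    """Cheap positive-definiteness test via Cholesky."""
    try:
        np.linalg.cholesky(K)
        return True
    except np.linalg.LinAlgError:
        return False

def learn_kernel(K0, mask, target, lam=1.0, newton_iters=10, cg_iters=20):
    """Minimize 0.5 * sum_S (K_ij - T_ij)^2 + lam * D_ld(K, K0) over K."""
    K = K0.copy()                       # start from the prior kernel
    K0_inv = np.linalg.inv(K0)
    for _ in range(newton_iters):
        K_inv = np.linalg.inv(K)
        # gradient: squared entry loss on S plus LogDet term (K0^{-1} - K^{-1})
        grad = mask * (K - target) + lam * (K0_inv - K_inv)

        def hess_op(V):
            # operator-form Hessian-vector product: a few n x n matrix
            # products instead of an explicit n^2 x n^2 Hessian
            return mask * V + lam * (K_inv @ V @ K_inv)

        # conjugate gradients on H[D] = -grad, inner product <A,B> = sum(A*B)
        D = np.zeros_like(K)
        R = -grad
        P = R.copy()
        rs = np.sum(R * R)
        for _ in range(cg_iters):
            HP = hess_op(P)
            a = rs / np.sum(P * HP)
            D += a * P
            R -= a * HP
            rs_new = np.sum(R * R)
            if np.sqrt(rs_new) < 1e-8:
                break
            P = R + (rs_new / rs) * P
            rs = rs_new
        # backtracking stand-in for the trust region: keep K positive definite
        t = 1.0
        while not is_pd(K + t * D):
            t *= 0.5
        K = K + t * D
        K = 0.5 * (K + K.T)             # symmetrize numerically
    return K
```

Each Hessian application costs only a few n×n matrix multiplications, which is what makes CG-based Newton steps tractable when the number of variables reaches the millions.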
Similar Papers
Improving Unconstrained Iris Recognition Performance via Domain Adaptation Metric Learning Method
To improve the performance of unconstrained iris recognition systems in different environments, a method based on domain adaptation metric learning is proposed. A kernel matrix is calculated as the solution of the domain adaptation metric learning problem. The known Hamming distances computed within and between classes are used as the optimization learni...
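As a concrete illustration of the quantities this blurb mentions, the sketch below computes fractional Hamming distances between binary iris codes, split into intra-class and inter-class sets. The toy codes, labels, and sizes are invented for the example; the metric-learning step itself is omitted.

```python
import numpy as np

def hamming_sets(codes, labels):
    """Return intra-class and inter-class fractional Hamming distances."""
    n, d = codes.shape
    intra, inter = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.count_nonzero(codes[i] != codes[j]) / d
            (intra if labels[i] == labels[j] else inter).append(dist)
    return np.array(intra), np.array(inter)

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(20, 256))   # toy 256-bit binary iris codes
labels = rng.integers(0, 4, size=20)         # four hypothetical subjects
intra, inter = hamming_sets(codes, labels)
print(intra.mean(), inter.mean())            # a learned metric would separate these
```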
Multi-label Multiple Kernel Learning
We present a multi-label multiple kernel learning (MKL) formulation in which the data are embedded into a low-dimensional space directed by the instance-label correlations encoded into a hypergraph. We formulate the problem in the kernel-induced feature space and propose to learn the kernel matrix as a linear combination of a given collection of kernel matrices in the MKL framework. The proposed...
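The MKL building block referred to here is a learned kernel expressed as a convex combination of given base kernels, K = Σₖ μₖ Kₖ. A minimal sketch under that assumption follows; the base kernels and uniform weights are illustrative, and the hypergraph-driven learning of μ is not reproduced.

```python
import numpy as np

def combine_kernels(kernels, mu):
    """Convex combination of base kernel matrices."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
    return sum(m * K for m, K in zip(mu, kernels))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
bases = [X @ X.T, np.exp(-d2), np.exp(-0.1 * d2)]     # linear + two RBF kernels
K = combine_kernels(bases, mu=[1/3, 1/3, 1/3])        # uniform weights as a start
```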
Efficient Output Kernel Learning for Multiple Tasks
The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly and thus exploiting the similarity between the tasks rather than learning them independently of each other. While previously the relationship between tasks had to be user-defined in the form of an output kernel, recent approaches jointly learn the tasks and the output kernel. As the outpu...
Composite Kernel Optimization in Semi-Supervised Metric
Machine-learning solutions to classification, clustering, and matching problems depend critically on the adopted metric, which in the past was selected heuristically. In the last decade, it has been demonstrated that an appropriate metric can be learnt from data, resulting in superior performance compared with traditional metrics. This has recently stimulated a considerable interest in the to...
Various Hyperplane Classifiers Using Kernel Feature Spaces
In machine learning, a classification approach may be linear or nonlinear, but by using the so-called kernel idea, linear methods can be readily generalized to nonlinear ones. The key idea was originally presented in Aizermann's paper [1] and was successfully renewed in the context of the ubiquitous Support Vector Machines (SVM) [2]. The roots of SV methods can be traced b...
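A compact way to see the kernel idea is a linear method written entirely in terms of inner products, so that substituting a kernel for the Gram matrix makes it nonlinear. The sketch below does this for ridge regression in its dual form; the RBF kernel, bandwidth, and toy data are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0])                                    # nonlinear target
K = rbf(X, X)                                          # swap in any PSD kernel here
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)  # dual ridge coefficients
y_hat = rbf(X, X) @ alpha                              # predictions via kernels only
```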