Fast Multiple Kernel Learning With Multiplicative Weight Updates

Authors

  • John Moeller
  • Parasaran Raman
  • Avishek Saha
  • Suresh Venkatasubramanian
Abstract

We present a fast algorithm for multiple kernel learning (MKL). Our matrix multiplicative weight update (MWUMKL) algorithm is based on a well-known QCQP formulation [5]. In addition, we propose a novel fast matrix exponentiation routine for QCQPs, which might be of independent interest. Our method avoids the use of commercial nonlinear solvers and scales efficiently to large data sets.
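The core step behind a matrix multiplicative weight update is to exponentiate a negated, scaled sum of loss matrices and normalize by the trace, yielding a density matrix. Below is a minimal NumPy sketch of that generic step in the Arora-Kale style; the function name, the toy kernels, and the eigendecomposition-based exponential are illustrative assumptions, not the paper's fast exponentiation routine or its exact QCQP formulation.

```python
import numpy as np

def matrix_mwu_step(loss_sum, eta):
    """One generic matrix multiplicative-weights step (illustrative sketch).

    Given the running sum of symmetric loss matrices and a rate eta,
    return the density matrix
        X = exp(-eta * loss_sum) / Tr(exp(-eta * loss_sum)).
    A plain eigendecomposition is used here; the paper's contribution is
    a faster exponentiation routine, which is not reproduced.
    """
    w, V = np.linalg.eigh(loss_sum)   # symmetric eigendecomposition
    w = -eta * w
    w -= w.max()                      # shift exponents for numerical stability
    expw = np.exp(w)
    X = (V * expw) @ V.T              # V diag(expw) V^T
    return X / np.trace(X)

# Toy usage: exponentiate a mix of two PSD "kernel" matrices.
rng = np.random.default_rng(0)
K1 = rng.standard_normal((5, 5)); K1 = K1 @ K1.T
K2 = rng.standard_normal((5, 5)); K2 = K2 @ K2.T
X = matrix_mwu_step(0.5 * K1 + 0.5 * K2, eta=0.1)
print(np.trace(X))   # ~1.0: X is a density matrix
```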


Similar Articles

Path Kernels and Multiplicative Updates

Kernels are typically applied to linear algorithms whose weight vector is a linear combination of the feature vectors of the examples. On-line versions of these algorithms are sometimes called “additive updates” because they add a multiple of the last feature vector to the current weight vector. In this paper we have found a way to use special convolution kernels to efficiently implement “multi...
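To make the contrast concrete, here is a minimal sketch of the two update styles on a single example: an additive (perceptron-style) step versus a multiplicative (exponentiated-gradient-style) step. The function names and the simplex normalization are illustrative assumptions; the paper's convolution-kernel construction is not reproduced here.

```python
import numpy as np

def additive_update(w, x, y, eta):
    """Perceptron-style additive update: add a multiple of the feature vector."""
    return w + eta * y * x

def multiplicative_update(w, x, y, eta):
    """Exponentiated-gradient-style update: rescale each weight by an
    exponential factor, then renormalize onto the probability simplex."""
    w = w * np.exp(eta * y * x)
    return w / w.sum()

# One labeled example: the additive rule moves w along x, the multiplicative
# rule reweights coordinates in proportion to how strongly they agree with x.
x = np.array([0.2, -0.1, 0.7])
y = 1.0
print(additive_update(np.zeros(3), x, y, eta=0.5))
print(multiplicative_update(np.full(3, 1.0 / 3.0), x, y, eta=0.5))
```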


Multiplicative updates for non-negative projections

We present here how to construct multiplicative update rules for non-negative projections based on Oja’s iterative learning rule. Our method integrates the multiplicative normalization factor into the original additive update rule as an additional term which generally has a roughly opposite direction. As a consequence, the modified additive learning rule can easily be converted to its multiplic...
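For reference, the additive rule being converted is Oja's iterative learning rule for a single linear unit. The sketch below shows that standard additive form only; the multiplicative conversion described in the abstract is not reproduced, and the function name and toy data are illustrative assumptions.

```python
import numpy as np

def oja_step(w, x, eta):
    """One step of Oja's additive learning rule for a single linear unit.

    With y = w.x, the rule w <- w + eta * y * (x - y * w) performs Hebbian
    learning with an implicit normalization that keeps ||w|| near 1.
    """
    y = w @ x
    return w + eta * y * (x - y * w)

# Toy usage: w drifts toward the leading principal direction of the data.
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 3)) @ np.diag([3.0, 1.0, 0.5])
w = rng.standard_normal(3)
w /= np.linalg.norm(w)
for x in data:
    w = oja_step(w, x, eta=0.01)
print(w)
```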


Multiplicative updates For Non-Negative Kernel SVM

We present multiplicative updates for solving hard and soft margin support vector machines (SVM) with non-negative kernels. They follow as a natural extension of the updates for non-negative matrix factorization. No additional parameter setting, such as choosing a learning rate, is required. Experiments demonstrate rapid convergence to good classifiers. We analyze the rates of asymptotic converge...
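Updates of this kind rescale each dual variable by a closed-form factor, with no learning rate. Below is a minimal sketch of the generic multiplicative update for nonnegative quadratic programming (following Sha et al.), applied to a toy SVM dual; the bias/equality constraint is ignored, and the specific update used in the cited paper may differ.

```python
import numpy as np

def nqp_multiplicative_step(A, b, v, eps=1e-12):
    """One multiplicative update for nonnegative quadratic programming:
        minimize 0.5 * v^T A v + b^T v   subject to v >= 0.
    Each coordinate is rescaled by a closed-form nonnegative factor.
    """
    A_pos = np.clip(A, 0, None)       # positive part of A
    A_neg = np.clip(-A, 0, None)      # magnitude of negative part
    a = A_pos @ v + eps
    c = A_neg @ v
    factor = (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
    return v * factor

# Toy hard-margin SVM dual (bias/equality constraint omitted for brevity):
# minimize 0.5 * a^T (K * y y^T) a - 1^T a,  a >= 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
A = (X @ X.T) * np.outer(y, y)        # linear kernel times label outer product
b = -np.ones(len(y))
alpha = np.full(len(y), 0.1)
for _ in range(200):
    alpha = nqp_multiplicative_step(A, b, alpha)
print(alpha.round(3))
```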


Ensembles of Partially Trained SVMs with Multiplicative Updates

The training of support vector machines (SVM) involves a quadratic programming problem, which is often optimized by a complicated numerical solver. In this paper, we propose a much simpler approach based on multiplicative updates. This idea was first explored in [Cristianini et al., 1999], but its convergence is sensitive to a learning rate that has to be fixed manually. Moreover, the update ru...


Introduction to the Special Issue on Learning Theory

This special issue builds on material presented at the Fifteenth Annual Conference on Computational Learning Theory (COLT 2002) and the Sixteenth Annual Conference on Neural Information Processing Systems (NIPS*2002), held in Sydney in July 2002 and in Vancouver in December 2002, respectively. We contacted authors who presented work on the analysis of learning in general, and with a particular f...



Journal:
  • CoRR

Volume: abs/1206.5580
Issue:
Pages: -
Publication date: 2012