Fast multiple instance learning via L1,2 logistic regression

Authors

  • Zhouyu Fu
  • Antonio Robles-Kelly
Abstract

In this paper, we develop an efficient logistic regression model for multiple instance learning that combines L1 and L2 regularisation techniques. An L1-regularised logistic regression model is first learned to identify the sparsity pattern of the features. To train the L1 model efficiently, we employ a convex differentiable approximation of the L1 cost function, which can be solved by a quasi-Newton method. We then train an L2-regularised logistic regression model only on the subset of features with nonzero weights returned by the L1 logistic regression. Experimental results demonstrate the utility and efficiency of the proposed approach compared to a number of alternatives.
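
The two-stage recipe in the abstract can be illustrated with a short sketch. The code below is not the authors' multiple-instance formulation: it treats an ordinary single-instance classification problem, uses the smooth surrogate sqrt(w^2 + eps) as one possible differentiable approximation of the L1 term (the paper's specific approximation is not given here), and uses L-BFGS as the quasi-Newton solver. All function names and the toy data are hypothetical.

```python
# Minimal sketch of the two-stage L1-then-L2 logistic regression idea.
# Assumptions (not from the paper): single-instance data, the smooth surrogate
# sqrt(w^2 + eps) for |w|, and L-BFGS as the quasi-Newton solver.
import numpy as np
from scipy.optimize import minimize

def smooth_l1_logistic_objective(w, X, y, lam, eps=1e-6):
    """Logistic loss (labels in {-1, +1}) plus a smoothed L1 penalty."""
    z = X @ w
    nll = np.sum(np.logaddexp(0.0, -y * z))
    penalty = lam * np.sum(np.sqrt(w ** 2 + eps))   # differentiable surrogate for |w|
    return nll + penalty

def smooth_l1_logistic_grad(w, X, y, lam, eps=1e-6):
    """Gradient of the smoothed objective above."""
    z = X @ w
    sig = 1.0 / (1.0 + np.exp(y * z))
    grad_nll = -(X.T @ (y * sig))
    grad_pen = lam * w / np.sqrt(w ** 2 + eps)
    return grad_nll + grad_pen

def l2_logistic_objective(w, X, y, lam):
    """Logistic loss plus an L2 (ridge) penalty."""
    z = X @ w
    return np.sum(np.logaddexp(0.0, -y * z)) + 0.5 * lam * np.dot(w, w)

def two_stage_fit(X, y, lam_l1=1.0, lam_l2=0.1, tol=1e-4):
    d = X.shape[1]
    # Stage 1: quasi-Newton (L-BFGS) on the smoothed L1 objective.
    res1 = minimize(smooth_l1_logistic_objective, np.zeros(d),
                    args=(X, y, lam_l1), jac=smooth_l1_logistic_grad,
                    method="L-BFGS-B")
    support = np.abs(res1.x) > tol                  # keep (near-)nonzero weights
    # Stage 2: L2-regularised logistic regression on the selected features only.
    Xs = X[:, support]
    res2 = minimize(l2_logistic_objective, np.zeros(Xs.shape[1]),
                    args=(Xs, y, lam_l2), method="L-BFGS-B")
    return support, res2.x

# Toy usage: 200 samples, 50 features, only the first 5 informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 2.0
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
support, w_hat = two_stage_fit(X, y)
print("selected features:", np.flatnonzero(support))
```

The design choice mirrored here is that the expensive sparsity-inducing step is run only once, and the final, better-conditioned L2 model is then fit on the much smaller selected feature subset.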

Related articles

Cancer Detection with Multiple Radiologists via Soft Multiple Instance Logistic Regression and L1 Regularization

This paper deals with the multiple annotation problem in the medical application of cancer detection in digital images. The main assumption is that, although images are labeled by many experts, the number of images read by the same expert is not large. Thus, differing from existing work that models each expert and the ground truth simultaneously, the multi-annotation information is used in a soft manner...

Fast Implementation of l1 Regularized Learning Algorithms Using Gradient Descent Methods

With the advent of high-throughput technologies, l1 regularized learning algorithms have attracted much attention recently. Dozens of algorithms have been proposed for fast implementation, using various advanced optimization techniques. In this paper, we demonstrate that l1 regularized learning problems can be easily solved by using gradient-descent techniques. The basic idea is to transform a ...

A Fast Hybrid Algorithm for Large-Scale l1-Regularized Logistic Regression

l1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of l1 regularization confers attractive properties on the classifier, such as feature selection, robustness to noise and, as a result, classifier generality in the context of supervised learning. W...

Distributed Coordinate Descent for L1-regularized Logistic Regression

Solving logistic regression with L1-regularization in distributed settings is an important problem. The problem arises when the training dataset is very large and cannot fit in the memory of a single machine. We present d-GLMNET, a new algorithm for solving logistic regression with L1-regularization in a distributed setting. We empirically show that it is superior to distributed online learning via ...

Publication year: 2008