Least angle and ℓ1 penalized regression: A review
Authors
Abstract
Similar articles
Least Angle and L1 Regression: A Review
Least Angle Regression is a promising technique for variable selection applications, offering a nice alternative to stepwise regression. It provides an explanation for the similar behavior of LASSO (L1-penalized regression) and forward stagewise regression, and provides a fast implementation of both. The idea has caught on rapidly, and sparked a great deal of research interest. In this paper, w...
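The LARS/LASSO connection described in this abstract can be sketched in a few lines. This is a hedged illustration using scikit-learn's `lars_path` (an assumption of this note, not code from the paper): `method='lasso'` traces the piecewise-linear l1-penalized coefficient path, while `method='lar'` would give plain Least Angle Regression.

```python
# Hedged sketch: trace the LASSO path with the LARS algorithm via
# scikit-learn's lars_path (illustrative; not the paper's own code).
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
true_coef = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = X @ true_coef + 0.1 * rng.standard_normal(50)

# method='lasso' gives the l1-penalized (LASSO) path;
# method='lar' would give unmodified Least Angle Regression.
alphas, active, coefs = lars_path(X, y, method="lasso")

print(active)       # order in which predictors enter the active set
print(coefs.shape)  # (n_features, n_alphas): one column per breakpoint
```

As the abstract notes, the appeal is that the whole regularization path comes out of one pass: variables join the model one at a time as the penalty decreases, which is what makes LARS a fast implementation of both LASSO and forward stagewise.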
L1-norm Penalized Least Squares with SALSA
This lecture note describes an iterative optimization algorithm, ‘SALSA’, for solving L1-norm penalized least squares problems. We describe the use of SALSA for sparse signal representation and approximation, especially with overcomplete Parseval transforms. We also illustrate the use of SALSA to perform basis pursuit (BP), basis pursuit denoising (BPD), and morphological component analysis (MC...
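SALSA itself is an ADMM-type splitting method; as a hedged stand-in (not the lecture note's algorithm), the sketch below shows the simpler ISTA iteration for the same l1-norm penalized least-squares problem, built around the soft-thresholding (shrinkage) operator that also sits at the core of SALSA.

```python
# Hedged sketch: proximal-gradient (ISTA) solver for
#   min_x 0.5*||A x - y||^2 + lam*||x||_1
# illustrating the shrinkage step shared with SALSA.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise shrinkage toward 0."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)            # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -1.5]
y = A @ x_true + 0.05 * rng.standard_normal(40)

x_hat = ista(A, y, lam=0.5)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])  # recovered sparse support
```

The shrinkage operator is what produces exact zeros in the estimate, which is why l1-penalized least squares is used for sparse signal representation in the first place.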
Least Angle Regression
The purpose of model selection algorithms such as All Subsets, Forward Selection, and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regres...
A risk ratio comparison of L0 and L1 penalized regression
In the past decade, there has been an explosion of interest in using l1-regularization in place of l0-regularization for feature selection. We present theoretical results showing that while l1-penalized linear regression never outperforms l0-regularization by more than a constant factor, in some cases using an l1 penalty is infinitely worse than using an l0 penalty. We also compare algorithms f...
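The l0-vs-l1 distinction in this abstract can be made concrete on a toy problem. The sketch below is a hedged illustration (not the paper's experiments): for a tiny feature set, l0 selection is done by exhaustive search over size-k subsets, while the l1 penalty is handled by scikit-learn's coordinate-descent `Lasso`.

```python
# Hedged illustration: l0 (best-subset) vs l1 (Lasso) variable selection
# on a small synthetic problem; not the paper's own comparison.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, k = 30, 6, 2
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 3]] = [1.5, -1.0]
y = X @ beta + 0.1 * rng.standard_normal(n)

# l0: exhaustively pick the size-k subset with the smallest RSS.
best_rss, best_S = np.inf, None
for S in itertools.combinations(range(p), k):
    Xs = X[:, list(S)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((Xs @ coef - y) ** 2))
    if rss < best_rss:
        best_rss, best_S = rss, S

# l1: Lasso with a moderate penalty; support read off the coefficients.
lasso = Lasso(alpha=0.1).fit(X, y)
S_l1 = tuple(np.nonzero(np.abs(lasso.coef_) > 1e-6)[0])
print(best_S, S_l1)
```

Exhaustive l0 search is combinatorial (here only 15 subsets, but infeasible for large p), which is exactly the computational motivation for the l1 relaxation that the paper's risk-ratio analysis scrutinizes.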
Journal
Journal title: Statistics Surveys
Year: 2008
ISSN: 1935-7516
DOI: 10.1214/08-ss035