Search results for: lasso shrinkage method

Number of results: 374444

2011
SYLVAIN SARDY

Smooth James-Stein thresholding-based estimators enjoy smoothness like ridge regression and perform variable selection like the lasso. They have added flexibility thanks to more than one regularization parameter (like the adaptive lasso), and these parameters can be selected well thanks to an unbiased and smooth estimate of the risk. The motivation is a gravitational wave burst detection proble...

2007
Lukas Meier, Sara van de Geer, Peter Bühlmann

The group lasso is an extension of the lasso to do variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regression models and present an efficient algorithm that is especially suitable for high dimensional problems, w...
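The groupwise selection described in this abstract rests on one key operation: shrinking each predefined group of coefficients by its Euclidean norm, so that whole groups are zeroed at once. A minimal sketch of that groupwise soft-thresholding step follows; the coefficient vector, group boundaries, and penalty level `lam` are illustrative assumptions, not taken from the paper.

```python
# Groupwise soft-thresholding: the core shrinkage step behind the group
# lasso (exact for an orthonormal design). Each group g is scaled by
# max(0, 1 - lam/||beta_g||), so small-norm groups are zeroed entirely.
import math

def group_soft_threshold(beta, groups, lam):
    """Return a copy of beta with each group shrunk by its Euclidean norm."""
    out = list(beta)
    for g in groups:
        norm = math.sqrt(sum(beta[j] ** 2 for j in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for j in g:
            out[j] = scale * beta[j]
    return out

# Toy example (an assumption): two groups of two coefficients each.
beta = [3.0, 4.0, 0.3, 0.4]
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
# The first group (norm 5) is shrunk; the second (norm 0.5 < lam) is
# zeroed as a whole, which is exactly the group-selection behavior.
```

Note how selection happens at the group level: unlike the ordinary lasso, individual coefficients within a surviving group are never zeroed separately.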

2006
Jianfeng Gao, Hisami Suzuki, Bin Yu

Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to a penalty on model complexity. This paper explores the use of lasso for statistical language modeling for text input. Owing to the very large number of parameters, directly optimizing the penalized lasso loss function is impossible. Therefore, we investig...
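The penalized optimization this abstract refers to can be made concrete with a tiny sketch: cyclic coordinate descent on the lasso objective (1/2n)·||y − Xβ||² + λ·||β||₁, where each coordinate update is a soft-thresholding step. The toy data, λ value, and iteration count below are illustrative assumptions.

```python
# Minimal coordinate-descent lasso in pure Python. Each coordinate is
# updated by soft-thresholding the correlation of its column with the
# partial residual, which is what drives small coefficients to exactly 0.

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm_j = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / norm_j
    return b

# Toy data (an assumption): y depends only on the first feature, so a
# moderate lam should zero out the irrelevant second coefficient.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.3], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso_cd(X, y, lam=0.5)
```

This naive version recomputes residuals from scratch each update; at the scale of language-model parameter counts mentioned in the abstract, that direct approach is exactly what becomes infeasible.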

Journal: Journal of Machine Learning Research (JMLR), 2012
Rahul Mazumder, Trevor J. Hastie

We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly ...
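The screening rule this abstract states can be illustrated directly: threshold the off-diagonal entries of the sample covariance matrix at λ, then take the connected components of the resulting graph. Per the stated result, these components determine the block structure of the graphical-lasso solution, so each block can be solved independently. The toy covariance matrix and λ below are illustrative assumptions.

```python
# Connected components of the thresholded sample covariance graph:
# vertices i and j are adjacent iff |S[i][j]| > lam (off-diagonal only).

def connected_components(S, lam):
    p = len(S)
    adj = [[j for j in range(p) if j != i and abs(S[i][j]) > lam]
           for i in range(p)]
    seen, comps = set(), []
    for s in range(p):
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:  # depth-first search over the thresholded graph
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        comps.append(sorted(comp))
    return comps

# Toy matrix (an assumption): variables {0,1} and {2,3} are strongly
# covarying within blocks; cross-block entries fall below lam = 0.2.
S = [[1.0, 0.8, 0.05, 0.0],
     [0.8, 1.0, 0.0, 0.1],
     [0.05, 0.0, 1.0, 0.7],
     [0.0, 0.1, 0.7, 1.0]]
comps = connected_components(S, lam=0.2)
```

Because the components here are {0, 1} and {2, 3}, the result in the abstract implies the graphical lasso at this λ can be solved as two independent 2×2 subproblems rather than one 4×4 problem.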

2011
Sophie Lambert-Lacroix, Laurent Zwald

Huber’s criterion is a useful method for robust regression. The adaptive least absolute shrinkage and selection operator (lasso) is a popular technique for simultaneous estimation and variable selection. The adaptive weights in the adaptive lasso allow it to achieve the oracle properties. In this paper we propose to combine Huber’s criterion with an adaptive lasso penalty. This regression tech...

2004
Greg Ridgeway

Algorithms for simultaneous shrinkage and selection in regression and classification provide attractive solutions to knotty old statistical challenges. Nevertheless, as far as we can tell, Tibshirani’s Lasso algorithm has had little impact on statistical practice. Two particular reasons for this may be the relative inefficiency of the original Lasso algorithm and the relative complexity of more...

2010
Minjung Kyung, Jeff Gill, Malay Ghosh, George Casella

Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent years, mostly through frequentist models. Properties such as consistency have been studied, and are achieved by different lasso variations. Here we look at a fully Bayesian formulation of the prob...

2011
Marco F. Duarte, Waheed U. Bajwa, Robert Calderbank

The lasso [19] and group lasso [23] are popular algorithms in the signal processing and statistics communities. In signal processing, these algorithms allow for efficient sparse approximations of arbitrary signals in overcomplete dictionaries. In statistics, they facilitate efficient variable selection and reliable regression under the linear model assumption. In both cases, there is now ample ...

2009
Peter Radchenko, Gareth M. James

Both classical Forward Selection and the more modern Lasso provide computationally feasible methods for performing variable selection in high dimensional regression problems involving many predictors. We note that although the Lasso is the solution to an optimization problem while Forward Selection is purely algorithmic, the two methods turn out to operate in surprisingly similar fashions. Our ...

2012
Ryan J. Tibshirani

The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables p exceeds the number of observations n. But when p > n, the lasso criterion is not strictly convex, and hence it may not have a unique minimum. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the ...

[Chart: number of search results per year]
