Search results for: lasso regressions

Number of results: 4601

Journal: EURASIP J. Adv. Sig. Proc., 2011
Jun Zhang, Yuanqing Li, Zhu Liang Yu, Zhenghui Gu

Parameterized quadratic programming (Lasso) is a powerful tool for the recovery of sparse signals based on underdetermined observations contaminated by noise. In this paper, we study the problem of simultaneous sparsity pattern recovery and approximation recovery based on the Lasso. An extended Lasso method is proposed with the following main contributions: (1) we analyze the recovery accuracy ...
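A minimal sketch of the recovery setting this abstract describes (a sparse signal observed through a noisy underdetermined system), using scikit-learn's `Lasso`. The dimensions, signal strength, and `alpha` below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 50, 100, 5                # n < p: underdetermined observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 3.0                 # sparse signal: only k nonzero coefficients
y = X @ beta_true + 0.1 * rng.standard_normal(n)  # observations with noise

fit = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(fit.coef_)  # estimated sparsity pattern
```

Here "sparsity pattern recovery" is reading off `support`, and "approximation recovery" is how close `fit.coef_` comes to the true coefficients.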

Journal: Statistics and Computing, 2015
Yi Yang, Hui Zou

This paper concerns a class of group-lasso learning problems where the objective function is the sum of an empirical loss and the group-lasso penalty. For a class of loss functions satisfying a quadratic majorization condition, we derive a unified algorithm called groupwise-majorization-descent (GMD) for efficiently computing the solution paths of the corresponding group-lasso penalized learning ...
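A sketch of the majorization idea (not the authors' GMD code): for squared-error loss the quadratic majorizer reduces each pass to a gradient step with step size 1/L followed by a groupwise soft-threshold. The group layout, `lam`, and data sizes are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(z, t):
    """Proximal operator of t * ||.||_2: shrink the whole group toward zero."""
    nrm = np.linalg.norm(z)
    return np.zeros_like(z) if nrm <= t else (1.0 - t / nrm) * z

def group_lasso_ista(X, y, groups, lam, n_iter=500):
    """Group-lasso penalized least squares via majorize-then-descend:
    quadratic majorization of the loss, then a groupwise prox update."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        for g in groups:                 # groupwise soft-thresholding
            beta[g] = group_soft_threshold(z[g], lam * np.sqrt(len(g)) / L)
    return beta

# Demo: one active group, one inactive group (illustrative values)
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
beta_true = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = group_lasso_ista(X, y, [[0, 1, 2], [3, 4, 5]], lam=0.3)
```

The groupwise prox either zeroes an entire group or shrinks it as a block, which is what makes the penalty select variables group by group.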

2009
Sanghee Cho, Antony Joseph, Kyoung Hee Kim

Contents excerpt: 5. The LASSO — 5.1 Performance of Lasso estimates; 5.2 "Normal equations" for the LASSO solutions; 5.3 Facts about Lasso solutions. This short document presents the Dantzig Selector, first introd...

2015
Bala Rajaratnam, Steven Roberts, Doug Sparks, Onkar Dalal

The application of the lasso is espoused in high-dimensional settings where only a small number of the regression coefficients are believed to be nonzero (i.e., the solution is sparse). Moreover, statistical properties of high-dimensional lasso estimators are often proved under the assumption that the correlation between the predictors is bounded. In this vein, coordinatewise methods, the most ...

Journal: Statistica Sinica, 2012
Noah Simon, Robert Tibshirani

We re-examine the original Group Lasso paper of Yuan and Lin (2007). The form of penalty in that paper seems to be designed for problems with uncorrelated features, but the statistical community has adopted it for general problems with correlated features. We show that for this general situation, a Group Lasso with a different choice of penalty matrix is generally more effective. We give insigh...

2007
Hansheng Wang, Guodong Li, Guohua Jiang

The least absolute deviation (LAD) regression is a useful method for robust regression, and the least absolute shrinkage and selection operator (lasso) is a popular choice for shrinkage estimation and variable selection. In this article we combine these two classical ideas together to produce LAD-lasso. Compared with the LAD regression, LAD-lasso can do parameter estimation and variable selecti...
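A sketch of one way to compute the LAD-lasso (not necessarily the authors' algorithm): both the absolute residuals and the L1 penalty are linear-programmable, so the whole objective can be handed to an LP solver. The data sizes and `lam` below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lasso(X, y, lam):
    """Minimize sum_i |y_i - x_i'b| + lam * sum_j |b_j| as a linear program:
    split b = b_pos - b_neg and residuals e = e_pos - e_neg, with all four
    parts nonnegative (linprog's default bounds)."""
    n, p = X.shape
    c = np.concatenate([lam * np.ones(2 * p), np.ones(2 * n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:p] - res.x[p : 2 * p]

# Demo: heavy-tailed (Laplace) noise with a sparse truth (illustrative values)
rng = np.random.default_rng(2)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta_true + rng.laplace(scale=0.1, size=n)
beta_hat = lad_lasso(X, y, lam=30.0)
```

Because the LP solution sits at a vertex, the inactive coefficients come out (numerically) exactly zero, giving variable selection and robust estimation in one step.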

2016
Hanzhong Liu, Bin Yu

We study the asymptotic properties of Lasso+mLS and Lasso+Ridge under the sparse high-dimensional linear regression model: Lasso selecting predictors and then modified Least Squares (mLS) or Ridge estimating their coefficients. First, we propose a valid inference procedure for parameter estimation based on parametric residual bootstrap after Lasso+mLS and Lasso+Ridge. Second, we der...

2011
Alexandre Belloni, Victor Chernozhukov

In this paper we study post-model-selection estimators which apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically the Lasso. It is well known that the Lasso can estimate the nonparametric regression function at nearly the oracle rate, and is thus hard to improve upon. We show that the OLS post-Lasso estimator performs at least as well as the Lasso in terms of t...
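The two-step estimator described here can be sketched as: run the Lasso for selection, then refit OLS on the selected columns. The helper name, data sizes, and `alpha` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def ols_post_lasso(X, y, alpha):
    """Step 1: the Lasso selects a model; step 2: an OLS refit on the
    selected columns removes the Lasso's shrinkage bias."""
    sel = np.flatnonzero(Lasso(alpha=alpha).fit(X, y).coef_)
    beta = np.zeros(X.shape[1])
    if sel.size:
        beta[sel] = np.linalg.lstsq(X[:, sel], y, rcond=None)[0]
    return beta

# Demo with a sparse truth (illustrative sizes)
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_post = ols_post_lasso(X, y, alpha=0.1)
```

The refit leaves the selected support unchanged but replaces the shrunken Lasso coefficients with unbiased least-squares estimates on that support.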

2006
Hansheng Wang, Guodong Li, Chih-Ling Tsai

The least absolute shrinkage and selection operator (lasso) has been widely used in regression shrinkage and selection. In this article, we extend its application to the REGression model with AutoRegressive errors (REGAR). Two types of lasso estimators are carefully studied. The first is similar to the traditional lasso estimator with only two tuning parameters (one for regression coefficients ...

2014
Jasdeep Pannu

We consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors in the presence of outliers. Since the LASSO is a special case of penalized least squares regression with an L1-penalty function, it suffers from heavy-tailed errors and/or outliers in the data. Recently, the LAD regressio...

[Chart: number of search results per year]