Search results for: lasso

Number of results: 4548

Journal: Annals of Statistics 2014
Jianqing Fan Yingying Fan Emre Barut

Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ul...
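
A minimal sketch of the weighted L1-penalized quantile regression idea behind WR-Lasso, assuming scikit-learn's QuantileRegressor is available. The weights, penalty level, and data below are illustrative, and folding the weights into the design by rescaling columns is a standard reformulation rather than the authors' own implementation.

```python
# Hedged sketch: weighted L1-penalized quantile regression (the WR-Lasso idea).
# Assumes scikit-learn >= 1.0; the weights below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p, tau = 200, 50, 0.5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0
y = X @ beta + rng.standard_t(df=2, size=n)      # heavy-tailed noise

w = np.ones(p); w[:5] = 0.1                      # smaller weight = less shrinkage (illustrative)
Xw = X / w                                       # fold the weights into the design
model = QuantileRegressor(quantile=tau, alpha=0.05, fit_intercept=True)
model.fit(Xw, y)
beta_hat = model.coef_ / w                       # undo the rescaling
print(np.round(beta_hat[:8], 2))
```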

2015
Sandra Stankiewicz

I use the adaptive elastic net in a Bayesian framework and test its forecasting performance against lasso, adaptive lasso and elastic net (all used in a Bayesian framework) in a series of simulations, as well as in an empirical exercise for macroeconomic Euro area data. The results suggest that elastic net is the best model among the four Bayesian methods considered. Adaptive lasso, on the othe...
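
A minimal frequentist sketch of the four-way comparison, using scikit-learn's cross-validated Lasso and elastic net as stand-ins for the Bayesian versions the abstract evaluates; the adaptive lasso is built here with the usual pilot-estimate reweighting trick, and all data are simulated placeholders.

```python
# Hedged sketch: frequentist stand-ins (scikit-learn) for the lasso / adaptive lasso /
# elastic net comparison; the paper itself works with Bayesian versions of these models.
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV, Ridge

rng = np.random.default_rng(1)
n, p = 150, 30
X = rng.standard_normal((n, p))
beta = np.r_[np.array([3.0, -2.0, 1.5]), np.zeros(p - 3)]
y = X @ beta + rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)
enet = ElasticNetCV(cv=5, l1_ratio=[0.2, 0.5, 0.8]).fit(X, y)

# Adaptive lasso: reweight the L1 penalty with a pilot (ridge) fit,
# implemented by rescaling the columns of X.
pilot = Ridge(alpha=1.0).fit(X, y)
w = 1.0 / (np.abs(pilot.coef_) + 1e-6)
ada = LassoCV(cv=5).fit(X / w, y)
ada_coef = ada.coef_ / w
print(np.count_nonzero(lasso.coef_), np.count_nonzero(enet.coef_), np.count_nonzero(ada_coef))
```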

2008
Jinseog Kim Yuwon Kim Yongdai Kim

LASSO is a useful method for achieving both shrinkage and variable selection simultaneously. The main idea of LASSO is to use the L1 constraint in the regularization step which has been applied to various models such as wavelets, kernel machines, smoothing splines, and multiclass logistic models. We call such models with the L1 constraint generalized LASSO models. In this paper, we propose a ne...
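
A small sketch of one "generalized LASSO" model from the list above, an L1-regularized multiclass logistic fit. It uses scikit-learn's penalized (Lagrangian) form in place of the explicit L1 constraint the abstract describes, so it illustrates the model class rather than the paper's algorithm.

```python
# Hedged sketch: an L1-regularized multiclass logistic model, one of the
# "generalized LASSO" models the abstract mentions. scikit-learn's penalized
# (Lagrangian) form is used here in place of the explicit L1 constraint.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(X, y)
print("nonzero coefficients per class:", np.count_nonzero(clf.coef_, axis=1))
```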

2009
Junzhou Huang Tong Zhang

This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of th...
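
A minimal proximal-gradient sketch of the group Lasso itself (block soft-thresholding), not of the strong-group-sparsity theory; the group sizes, penalty level, and data are illustrative.

```python
# Hedged sketch: group Lasso via proximal gradient descent (block soft-thresholding).
# Groups, penalty level, and data are illustrative; the abstract's contribution is
# the strong-group-sparsity theory, not this particular solver.
import numpy as np

rng = np.random.default_rng(2)
n, p, g = 200, 40, 8                        # 8 groups of 5 coefficients
groups = np.repeat(np.arange(g), p // g)
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[groups < 2] = 1.0  # only the first two groups are active
y = X @ beta + 0.5 * rng.standard_normal(n)

lam = 0.1
step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
b = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ b - y) / n
    z = b - step * grad
    for k in range(g):                      # prox: shrink each group as a block
        idx = groups == k
        norm_k = np.linalg.norm(z[idx])
        shrink = max(0.0, 1.0 - step * lam * np.sqrt(idx.sum()) / (norm_k + 1e-12))
        b[idx] = shrink * z[idx]
print("active groups:", np.unique(groups[np.abs(b) > 1e-6]))
```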

Journal: :Computational Statistics & Data Analysis 2008
Hansheng Wang Chenlei Leng

Group lasso is a natural extension of lasso and selects variables in a grouped manner. However, group lasso suffers from estimation inefficiency and selection inconsistency. To remedy these problems, we propose the adaptive group lasso method. We show theoretically that the new method is able to identify the true model consistently, and the resulting estimator can be as efficient as oracle. Num...
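
A small sketch, under the assumption of a ridge pilot fit, of how group-specific adaptive weights are formed; each weight then rescales that group's penalty in a group-lasso solver such as the proximal sketch above.

```python
# Hedged sketch: constructing adaptive group-lasso weights from a pilot (ridge) fit.
# Each group's penalty becomes lam * w[k]; the group-lasso solver itself is omitted
# (see the proximal-gradient sketch above).
import numpy as np

rng = np.random.default_rng(3)
n, p, g = 200, 40, 8
groups = np.repeat(np.arange(g), p // g)
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[groups < 2] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)

pilot = np.linalg.solve(X.T @ X / n + 0.1 * np.eye(p), X.T @ y / n)   # ridge pilot
w = np.array([1.0 / (np.linalg.norm(pilot[groups == k]) + 1e-6) for k in range(g)])
print(np.round(w, 2))   # active groups get small weights, i.e. little extra shrinkage
```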

2010
Xiaoli Gao Jian Huang

The Lasso is an attractive approach to variable selection in sparse, high-dimensional regression models. Much work has been done to study the selection and estimation properties of the Lasso in the context of least squares regression. However, the least squares based method is sensitive to outliers. An alternative to the least squares method is the least absolute deviations (LAD) method which is...
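
A minimal sketch contrasting the ordinary Lasso with an L1-penalized least absolute deviations fit (a LAD-Lasso-type estimator, obtained here from scikit-learn's median QuantileRegressor) on data containing a few gross outliers; data and penalty levels are illustrative.

```python
# Hedged sketch: L1-penalized LAD (via the median QuantileRegressor) vs. the ordinary
# Lasso on data with gross outliers in the response. Data and penalties are illustrative.
import numpy as np
from sklearn.linear_model import Lasso, QuantileRegressor

rng = np.random.default_rng(4)
n, p = 150, 20
X = rng.standard_normal((n, p))
beta = np.r_[np.array([2.0, -1.5]), np.zeros(p - 2)]
y = X @ beta + 0.3 * rng.standard_normal(n)
y[:10] += 50.0                                   # gross outliers in the response

lasso = Lasso(alpha=0.1).fit(X, y)
lad = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X, y)
print("lasso:    ", np.round(lasso.coef_[:4], 2))
print("lad-lasso:", np.round(lad.coef_[:4], 2))
```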

2015
Lorenzo Camponovo

We study the validity of the pairs bootstrap for Lasso estimators in linear regression models with random covariates and heteroscedastic error terms. We show that the naive pairs bootstrap does not consistently estimate the distribution of the Lasso estimator. In particular, we identify two different sources for the failure of the bootstrap. First, in the bootstrap samples the Lasso estimator f...
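
A small sketch of the naive pairs bootstrap for a Lasso coefficient under heteroscedastic errors, i.e. the procedure whose inconsistency the abstract analyzes; it is shown for illustration only, not as a corrected bootstrap.

```python
# Hedged sketch: the naive pairs bootstrap for one Lasso coefficient with random
# covariates and heteroscedastic errors (the setting the paper studies).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X @ np.r_[1.0, np.zeros(p - 1)] + (1 + np.abs(X[:, 0])) * rng.standard_normal(n)

fit = Lasso(alpha=0.05).fit(X, y)
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)             # resample (x_i, y_i) pairs with replacement
    boot.append(Lasso(alpha=0.05).fit(X[idx], y[idx]).coef_[0])
print("point estimate:", round(fit.coef_[0], 3),
      "naive bootstrap 95% interval:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```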

2011
Derek Bean Peter Bickel Noureddine El Karoui Chinghway Lim Bin Yu

We discuss the behavior of penalized robust regression estimators in high dimensions and compare our theoretical predictions to simulations. Our results show the importance of the geometry of the dataset and shed light on the theoretical behavior of LASSO and much more involved methods.

Journal: Chemical Communications 2012
Si Jia Pan Jakub Rajniak Mikhail O Maksimov A James Link

The conserved threonine (Thr) residue in the penultimate position of the leader peptide of lasso peptides microcin J25 and capistruin can be effectively replaced by several amino acids close in size and shape to Thr. These findings suggest a model for lasso peptide biosynthesis in which the Thr sidechain is a recognition element for the lasso peptide maturation machinery.

Journal: Journal of Machine Learning Research 2013
Tingni Sun Cun-Hui Zhang

We propose a new method of learning a sparse nonnegative-definite target matrix. Our primary example of the target matrix is the inverse of a population covariance or correlation matrix. The algorithm first estimates each column of the target matrix by the scaled Lasso and then adjusts the matrix estimator to be symmetric. The penalty level of the scaled Lasso for each column is completely dete...
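
A minimal sketch of column-by-column scaled Lasso for a precision matrix, using the standard alternating (beta, sigma) iteration and a simple averaging symmetrization; the penalty level and symmetrization step are simplifications of the paper's exact construction.

```python
# Hedged sketch: column-wise scaled Lasso for a precision matrix, then a simple
# symmetrization. The alternating (beta, sigma) iteration is the usual scaled-Lasso
# recipe; the penalty level and averaging symmetrization are simplifications.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 300, 10
X = rng.standard_normal((n, p))
X[:, 1] += 0.7 * X[:, 0]                          # induce some correlation

lam0 = np.sqrt(2 * np.log(p) / n)
Omega = np.eye(p)
for j in range(p):
    others = [k for k in range(p) if k != j]
    Xj, yj = X[:, others], X[:, j]
    sigma = yj.std()
    for _ in range(20):                           # alternate beta- and sigma-updates
        fit = Lasso(alpha=sigma * lam0, fit_intercept=False).fit(Xj, yj)
        resid = yj - Xj @ fit.coef_
        sigma = np.linalg.norm(resid) / np.sqrt(n)
    Omega[j, j] = 1.0 / sigma**2                  # neighborhood-regression identity
    Omega[j, others] = -fit.coef_ / sigma**2
Omega = (Omega + Omega.T) / 2                     # adjust the estimator to be symmetric
print(np.round(Omega[:3, :3], 2))
```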
