Search results for: lasso method

Number of results: 374083

2012
Mohamed Hebiri Sara van de Geer

We consider a linear regression problem in a high dimensional setting where the number of covariates p can be much larger than the sample size n. In such a situation, one often assumes sparsity of the regression vector, i.e., the regression vector contains many zero components. We propose a Lasso-type estimator β̂_Quad (where ‘Quad’ stands for quadratic) which is based on two penalty terms. The first...

2006
Hsin-Cheng Huang Nan-Jung Hsu David Theobald Jay Breidt

Geographic information systems (GIS) organize spatial data in multiple two-dimensional arrays called layers. In many applications, a response of interest is observed on a set of sites in the landscape, and it is of interest to build a regression model from the GIS layers to predict the response at unsampled sites. Model selection in this context then consists not only of selecting appropriate l...

2016
Deguang Kong Ryohei Fujimaki Ji Liu Feiping Nie Chris Ding

Group LASSO is widely used to enforce structural sparsity, which achieves sparsity at the inter-group level. In this paper, we propose a new formulation called “exclusive group LASSO”, which brings out sparsity at the intra-group level in the context of feature selection. The proposed exclusive group LASSO is applicable to any feature structures, regardless of their overlapping or non-overl...

2013
Huijiang Gao Jiahan Li Hongwang Li Junya Li

Previous genome-wide association studies (GWAS) focused on low-order interactions between pairwise single-nucleotide polymorphisms (SNPs) with significant main effects. Little is known about how high-order interactions, especially those among SNPs without main effects, regulate quantitative traits. Within the frameworks of the linear model and the generalized linear model, the LASSO with coordinate des...

1999
Michael R. Osborne Brett Presnell Berwin A. Turlach

Proposed by Tibshirani (1996), the LASSO (least absolute shrinkage and selection operator) estimates a vector of regression coefficients by minimising the residual sum of squares subject to a constraint on the l1-norm of the coefficient vector. The LASSO estimator typically has one or more zero elements and thus shares characteristics of both shrinkage estimation and variable selection. In this pape...
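The constrained formulation in this abstract has an equivalent penalized form, min over β of (1/2n)‖y − Xβ‖² + λ‖β‖₁, which makes the origin of exact zero coefficients easy to see. A minimal coordinate-descent sketch (this is only an illustration of the estimator, not the homotopy algorithm the paper itself develops; function names and toy data are my own):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent on
    (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit.
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            # Soft-thresholding is what sets coefficients exactly to zero.
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)

b = lasso_cd(X, y, lam=0.5)
# lam biases estimates toward 0: b[0] is shrunk below 3,
# and most of the 8 irrelevant coefficients are exactly 0.
```

The shrinkage-plus-selection behaviour the abstract describes is visible directly: the fitted vector is both biased toward zero and sparse.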

2017
Niharika Gauraha Swapan K. Parui

We consider variable selection problems in high dimensional sparse regression models with strongly correlated variables. To handle correlated variables, the concept of clustering or grouping variables and then pursuing model fitting is widely accepted. When the dimension is very high, finding an appropriate group structure is as difficult as the original problem. We propose to use Elastic-net a...

2011
Ryan J. Tibshirani Jonathan Taylor

We present a path algorithm for the generalized lasso problem. This problem penalizes the l1 norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which facilitates computation and conceptual understanding of the path. For D = I (the usual lasso), we draw a connecti...
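The role of D in the penalty ‖Dβ‖₁ can be made concrete by evaluating it for two standard choices: D = I recovers the ordinary lasso penalty, while a first-difference matrix gives the 1D fused lasso, which penalizes jumps between adjacent coefficients. A small sketch (the function name is my own; this is not the paper's path algorithm):

```python
import numpy as np

def gen_lasso_penalty(D, beta):
    """Generalized-lasso penalty: ||D @ beta||_1."""
    return float(np.abs(D @ beta).sum())

beta = np.array([1.0, 1.0, 1.0, -2.0, -2.0])

# D = I: each coefficient is penalized individually (ordinary lasso).
D_identity = np.eye(5)
# Rows e_{i+1} - e_i: differences of neighbours (1D fused lasso).
D_fused = np.diff(np.eye(5), axis=0)

print(gen_lasso_penalty(D_identity, beta))  # 7.0 = |1|+|1|+|1|+|-2|+|-2|
print(gen_lasso_penalty(D_fused, beta))     # 3.0 = the single jump of size 3
```

The same piecewise-constant β is cheap under the fused-lasso D but expensive under D = I, which is exactly how the choice of D dictates the application.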

2012
Kang-Mo Jung

The least absolute shrinkage and selection operator (Lasso) method improves the low prediction accuracy and poor interpretation of the ordinary least squares (OLS) estimate through the use of L1 regularization on the regression coefficients. However, the Lasso is not robust to outliers, because the Lasso method minimizes the sum of squared residual errors. Even though the least absolute deviatio...

2010
Yang Zhou Rong Jin Steven C. H. Hoi

We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-tas...
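The contrast this abstract draws can be stated through the two regularizers: the group lasso sums ℓ2 norms over groups (zeroing out whole groups of covarying variables), while the exclusive lasso sums squared ℓ1 norms (so variables within a group compete and sparsity appears inside each group). A minimal sketch of both penalties (function names and the toy vector are my own):

```python
import numpy as np

def group_lasso_penalty(w, groups):
    """Group lasso: sum_g ||w_g||_2 -- tends to zero out entire groups."""
    return float(sum(np.linalg.norm(w[g]) for g in groups))

def exclusive_lasso_penalty(w, groups):
    """Exclusive lasso: sum_g ||w_g||_1 ** 2 -- variables inside a
    group compete, so sparsity arises *within* each group."""
    return float(sum(np.abs(w[g]).sum() ** 2 for g in groups))

w = np.array([1.0, -1.0, 0.0, 2.0])
groups = [np.array([0, 1]), np.array([2, 3])]

print(group_lasso_penalty(w, groups))      # sqrt(2) + 2
print(exclusive_lasso_penalty(w, groups))  # (1+1)**2 + (0+2)**2 = 8.0
```

Squaring the per-group ℓ1 norm makes a second nonzero entry in an already-active group disproportionately expensive, which is the competition effect the abstract describes.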

Journal: CoRR, 2013
Martin Jaggi

We investigate the relation of two fundamental tools in machine learning, that is, the support vector machine (SVM) for classification and the Lasso technique used in regression. We show that the resulting optimization problems are equivalent, in the following sense: given any instance of an l2-loss soft-margin (or hard-margin) SVM, we construct a Lasso instance having the same optimal solutions...
