Search results for: lasso
Number of results: 4548
The lasso [19] and group lasso [23] are popular algorithms in the signal processing and statistics communities. In signal processing, these algorithms allow for efficient sparse approximations of arbitrary signals in overcomplete dictionaries. In statistics, they facilitate efficient variable selection and reliable regression under the linear model assumption. In both cases, there is now ample ...
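The group-level sparsity mentioned in this abstract comes from the group lasso's proximal operator, block soft-thresholding, which zeroes out whole groups of coefficients at once. A minimal numpy sketch (function names here are illustrative, not from the cited works):

```python
import numpy as np

def group_soft_threshold(b, groups, t):
    # Block soft-thresholding: the proximal operator of t * sum_g ||b_g||_2
    # (the group lasso penalty). Each group is shrunk toward zero as a unit;
    # groups whose Euclidean norm is below t are set exactly to zero.
    out = np.zeros_like(b)
    for g in groups:
        norm = np.linalg.norm(b[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * b[g]
    return out

b = np.array([3.0, 4.0, 0.1, 0.1])
shrunk = group_soft_threshold(b, [[0, 1], [2, 3]], 1.0)
# First group (norm 5) survives, scaled by 0.8; second group (norm ~0.14) is zeroed.
```

Contrast with the plain lasso, whose proximal operator shrinks each coefficient independently: the group version selects or discards entire blocks, which is what makes it suitable for structured dictionaries and grouped covariates.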
Both classical Forward Selection and the more modern Lasso provide computationally feasible methods for performing variable selection in high dimensional regression problems involving many predictors. We note that although the Lasso is the solution to an optimization problem while Forward Selection is purely algorithmic, the two methods turn out to operate in surprisingly similar fashions. Our ...
The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables p exceeds the number of observations n. But when p > n, the lasso criterion is not strictly convex, and hence it may not have a unique minimum. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the ...
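The lasso criterion discussed above, (1/2n)||y − Xβ||² + λ||β||₁, can be minimized by cyclic coordinate descent with soft-thresholding. A self-contained numpy sketch (assuming a centered design with no intercept; names are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: proximal map of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-column curvature X_j'X_j / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r / n               # univariate least-squares fit
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(n)
b_hat = lasso_cd(X, y, lam=0.1)
# b_hat recovers the three active coefficients and drives the rest to (near) zero.
```

Note that even when p > n and the criterion is not strictly convex, the fitted values Xβ̂ and the ℓ₁ norm of the solution are unique for λ > 0; only the coefficient vector itself can be non-unique, which is the situation the uniqueness results reviewed in this abstract address.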
Background and objectives: Constant monitoring of healthcare organizations' performance is an integral part of informed health policy-making. Several hospital performance assessment methods have been proposed in the literature. The Pabon Lasso model offers a fast and convenient method for comparative evaluation of hospital performance. This study aimed to evaluate the relative performance of hospit...
Group LASSO is widely used to enforce structural sparsity, achieving sparsity at the inter-group level. In this paper, we propose a new formulation called "exclusive group LASSO", which brings out sparsity at the intra-group level in the context of feature selection. The proposed exclusive group LASSO is applicable to any feature structure, regardless of their overlapping or non-overl...
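The inter- vs. intra-group distinction above can be made concrete by comparing the two penalties directly: the group lasso uses sum_g ||b_g||₂ (whole groups in or out), while the exclusive lasso penalty is usually written sum_g ||b_g||₁², which is ℓ₁-like within each group but ℓ₂-like across groups. A small numpy sketch under that assumed penalty form:

```python
import numpy as np

def group_lasso_penalty(b, groups):
    # sum_g ||b_g||_2 : favors zeroing out entire groups (inter-group sparsity)
    return sum(np.linalg.norm(b[g]) for g in groups)

def exclusive_group_lasso_penalty(b, groups):
    # sum_g ||b_g||_1^2 : favors few nonzeros inside each group, but spreads
    # the selected features across groups (intra-group sparsity)
    return sum(np.abs(b[g]).sum() ** 2 for g in groups)

g = [[0, 1], [2, 3]]
b_spread = np.array([1.0, 0.0, 1.0, 0.0])        # one feature per group
b_concentrated = np.array([1.0, 1.0, 0.0, 0.0])  # both features in one group
# The exclusive penalty prefers b_spread; the group lasso penalty prefers
# b_concentrated, illustrating the opposite sparsity patterns they induce.
```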
In this paper, we discuss a parsimonious approach to estimation of high-dimensional covariance matrices via the modified Cholesky decomposition with lasso. Two different methods are proposed. They are the equiangular and equi-sparse methods. We use simulation to compare the performance of the proposed methods with others available in the literature, including the sample covariance matrix, the b...
In this paper we use the adaptive lasso estimator to select between relevant and irrelevant instruments in heteroskedastic and non-Gaussian data. To do so, the limit theory of Zou (2006) is extended from the univariate iid case. Next, it is shown that the adaptive lasso estimator can achieve the near-minimax risk bound even in the case of heteroskedastic data. To achieve that, a new proof is used that benefits from Ste...
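For intuition about the adaptive lasso used above: in the special case of an orthonormal design (X'X = nI), it reduces to soft-thresholding the OLS estimate with coefficient-specific thresholds λ·w_j, where w_j = 1/|β̂_ols,j|^γ, so large OLS coefficients are barely shrunk while small ones face a huge threshold. A sketch of this simple special case (not the paper's heteroskedastic, non-Gaussian setting):

```python
import numpy as np

def adaptive_lasso_orthonormal(X, y, lam, gamma=1.0):
    # Closed-form adaptive lasso assuming an orthonormal design, X'X = n I:
    #   b_j = soft(b_ols_j, lam * w_j),  w_j = 1 / |b_ols_j|^gamma
    n = X.shape[0]
    b_ols = X.T @ y / n
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)  # guard against division by zero
    thr = lam * w
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - thr, 0.0)

rng = np.random.default_rng(1)
n, p = 200, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = np.sqrt(n) * Q                   # columns scaled so that X'X = n I
beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.standard_normal(n)
b_hat = adaptive_lasso_orthonormal(X, y, lam=0.05)
# Irrelevant coefficients are thresholded to exactly zero; relevant ones are
# nearly unbiased, which is the oracle property the adaptive lasso is known for.
```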
In the high-dimensional regression model a response variable is linearly related to p covariates, but the sample size n is smaller than p. We assume that only a small subset of covariates is ‘active’ (i.e., the corresponding coefficients are non-zero), and consider the model-selection problem of identifying the active covariates. A popular approach is to estimate the regression coefficients thr...
Lasso is a popular method for variable selection in regression. Much theoretical understanding has been obtained recently on its model selection or sparsity recovery properties under sparse and homoscedastic linear regression models. Since these standard model assumptions are often not met in practice, it is important to understand how Lasso behaves under nonstandard model assumptions. In this ...