Search results for: Lasso shrinkage method
Number of results: 374,444
Variable selection is a topic of great importance in high-dimensional statistical modeling and has a wide range of real-world applications. Many variable selection techniques have been proposed in the context of linear regression, and the Lasso model is probably one of the most popular penalized regression techniques. In this paper, we propose a new, fully hierarchical, Bayesian version of the ...
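As a point of reference for the penalized-regression setting this snippet describes (not the paper's Bayesian method), here is a minimal sketch of the ordinary Lasso with scikit-learn, showing how the L1 penalty performs variable selection by zeroing out coefficients; the data and penalty level are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:3] = [2.0, -1.5, 1.0]          # only 3 truly active features
y = X @ beta + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
n_selected = int(np.sum(model.coef_ != 0))
print(n_selected)                     # far fewer than d = 20 nonzero coefficients
```

With the penalty set well above the noise level, the spurious coefficients are thresholded to exactly zero while the strong signals survive (shrunk slightly toward zero).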
While considerable advances have been made in estimating high-dimensional structured models from independent data using Lasso-type models, limited progress has been made for settings where the samples are dependent. We consider estimating a structured VAR (vector autoregressive) model, where the structure can be captured by any suitable norm, e.g., Lasso, group Lasso, ordered weighted Lasso, etc. I...
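A hedged sketch of the simplest case the snippet mentions: a Lasso-penalized VAR(1), where each series is regressed on all lagged series with an L1 penalty, recovering a sparse transition matrix. The names (`A`, `A_hat`) and the plain equation-by-equation Lasso are illustrative assumptions, not the paper's norm-general estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
p, T = 5, 400
A = np.zeros((p, p))
A[np.arange(p), np.arange(p)] = 0.5       # sparse truth: diagonal dynamics

X = np.zeros((T, p))
for t in range(1, T):                     # simulate X_t = A X_{t-1} + noise
    X[t] = X[t - 1] @ A.T + rng.standard_normal(p)

lagged, current = X[:-1], X[1:]
A_hat = np.vstack([                       # one Lasso regression per series
    Lasso(alpha=0.1).fit(lagged, current[:, j]).coef_
    for j in range(p)
])
print(np.sum(A_hat != 0))                 # close to the p true nonzeros
```

Fitting each row of the transition matrix separately keeps the problem a set of ordinary Lasso regressions, which is why dependence across time (rather than across equations) is the statistical difficulty the snippet highlights.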
In this project, we discuss high-dimensional regression, where the dimension of the multivariate distribution is larger than the sample size, i.e., d ≫ n. Under the assumption of a sparse structure in the underlying multivariate distribution, we take advantage of the ℓ1-regularized method for parameter estimation. There are two major problems that will be discussed in this project: (1) a family o...
We consider the group lasso penalty for the linear model. We note that the standard algorithm for solving the problem assumes that the model matrices in each group are orthonormal. Here we consider a more general penalty that blends the lasso (L1) with the group lasso (“two-norm”). This penalty yields solutions that are sparse at both the group and individual feature levels. We derive an effici...
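The blended penalty described here (often called the sparse group lasso) admits a simple proximal operator: elementwise soft-thresholding for the L1 part, followed by group-wise shrinkage for the two-norm part. The sketch below shows that standard construction; it is illustrative, not the paper's derived algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise proximal step for the L1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_group_prox(v, groups, lam_l1, lam_group):
    """Prox of lam_l1*||b||_1 + lam_group*sum_g ||b_g||_2 at v."""
    b = soft_threshold(v, lam_l1)
    out = np.zeros_like(b)
    for g in groups:                          # g: index array for one group
        norm = np.linalg.norm(b[g])
        if norm > lam_group:
            out[g] = (1 - lam_group / norm) * b[g]   # shrink whole group
        # else: the entire group is zeroed out
    return out

v = np.array([3.0, -0.2, 0.1, 0.05, 2.5, -2.0])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
b = sparse_group_prox(v, groups, lam_l1=0.3, lam_group=0.5)
print(b)   # sparse both within and across groups
```

The two-stage prox makes the dual sparsity pattern visible: the L1 step kills small individual entries, and the group step can then remove an entire block whose remaining two-norm is below the group penalty.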
We present a data dependent generalization bound for a large class of regularized algorithms which implement structured sparsity constraints. The bound can be applied to standard squared-norm regularization, the Lasso, the group Lasso, some versions of the group Lasso with overlapping groups, multiple kernel learning and other regularization schemes. In all these cases competitive results are o...
We provide a history of the Lasso and survey new developments around a key concept, the debiased Lasso. The Lasso provided a good fit through sparse regression but did not deliver standard errors; the debiased Lasso delivers them.
In today's era of big data, the once widely used method of multiple linear regression (MLR) can no longer satisfy the need to handle big data, because of drawbacks such as multicollinearity, instability, and subjectivity in model selection. In contrast to MLR, the LASSO method has many good properties: it is stable, handles multicollinearity, and successfully selects the ...
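A quick numerical illustration of the snippet's claim (not the paper's own study): with two nearly collinear predictors, ordinary least squares produces wildly unstable coefficients, while the Lasso stays stable and keeps roughly one of the correlated copies.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(2)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 1e-6 * rng.standard_normal(n)   # near-duplicate column
X = np.column_stack([x1, x2])
y = 2 * x1 + 0.1 * rng.standard_normal(n)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.05).fit(X, y)
print(np.abs(ols.coef_).max())            # huge: OLS is unstable here
print(lasso.coef_)                        # total near 2, split stably
```

OLS must invert a nearly singular design, so tiny noise in the duplicated direction inflates its coefficients by orders of magnitude; the L1 penalty resolves the ambiguity by shrinking one copy (essentially) to zero.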
We propose a self-tuning √Lasso (square-root Lasso) method that simultaneously resolves three important practical problems in high-dimensional regression analysis: it handles the unknown scale, heteroscedasticity, and (drastic) non-Gaussianity of the noise. In addition, our analysis allows for badly behaved designs, for example, perfectly collinear regressors, and generates sharp bounds even in extreme case...
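One way to see why the square-root Lasso handles an unknown noise scale is its known connection to an iteratively rescaled ordinary Lasso: alternate between fitting a Lasso with penalty `lam * sigma` and re-estimating `sigma` from the residuals. The sketch below is that illustrative alternating scheme under simulated Gaussian noise, not the authors' implementation or their theoretical tuning rule.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, d = 150, 30
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:2] = [3.0, -2.0]
sigma_true = 2.0                        # unknown to the method
y = X @ beta + sigma_true * rng.standard_normal(n)

lam = 0.1                               # scale-free penalty level (assumed)
sigma = np.std(y)                       # crude initial scale estimate
for _ in range(10):                     # alternate: fit Lasso, update sigma
    model = Lasso(alpha=lam * sigma).fit(X, y)
    resid = y - model.predict(X)
    sigma = float(np.sqrt(np.mean(resid ** 2)))

print(round(sigma, 2))                  # close to sigma_true = 2.0
```

Because the effective penalty scales with the current residual estimate of sigma, the final fit behaves as if the noise level had been known in advance, which is the practical point of the √Lasso.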