Search results for: stein type shrinkage lasso

Number of results: 1,360,847

2017
Xiaoli Gao

Existing grouped variable selection methods rely heavily on prior group information, so they may not be reliable if an incorrect group assignment is used. In this paper, we propose a family of shrinkage variable selection operators by controlling the k-th largest norm (KAN). The proposed KAN method naturally exhibits flexible group-wise variable selection even though no correct prior gro...

2011
Miguel A. G. Belmonte Gary Koop Dimitris Korobilis

In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. I...

Journal: :CoRR 2016
Xingguo Li Jarvis D. Haupt Raman Arora Han Liu Mingyi Hong Tuo Zhao

Many statistical machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. In this paper, we study this fundamental tradeoff through a SQRT-Lasso problem for sparse linear regression and sparse precision matrix estimation in high dimensions. We explain how novel optimization techniques help address these computational chall...
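
For reference, the SQRT-Lasso objective for sparse linear regression is usually written as below; the notation (design matrix X, response y, sample size n, tuning parameter λ) is the standard one and is assumed here rather than taken from the abstract.

    \hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^{d}} \; \frac{1}{\sqrt{n}} \, \lVert y - X\beta \rVert_{2} \;+\; \lambda \, \lVert \beta \rVert_{1}

Taking the square root of the usual least-squares loss makes a good choice of λ independent of the noise level, at the price of an objective that is nonsmooth wherever y = Xβ, which is the kind of computational tradeoff the abstract alludes to.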

2008
Rudolf Beran

The Stein estimator ĝS and the better positive-part Stein estimator ĝPS both dominate the sample mean, under quadratic loss, in the N(g, I) model of dimension q ≥ 3. Standard large sample theory does not explain this phenomenon well. Plausible bootstrap estimators for the risk of ĝS do not converge correctly at the shrinkage point as sample size n increases. By analyzing a submodel exactly, wi...
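
As a concrete reminder of the estimators in question, here is a minimal numerical sketch of the classical James-Stein estimator and its positive-part version for a single observation from N(g, I_q); the formulas are the textbook ones and the data are synthetic, not from the paper.

    import numpy as np

    def james_stein(x):
        """James-Stein estimate of the mean of a single N(g, I_q) observation, q >= 3."""
        q = x.size
        shrink = 1.0 - (q - 2) / np.dot(x, x)
        return shrink * x

    def positive_part_james_stein(x):
        """Positive-part James-Stein estimate: the shrinkage factor is truncated at zero."""
        q = x.size
        shrink = max(0.0, 1.0 - (q - 2) / np.dot(x, x))
        return shrink * x

    # Synthetic example with q = 10 and true mean zero (the shrinkage point).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)
    print(james_stein(x))
    print(positive_part_james_stein(x))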

Journal: :IEICE Transactions 2006
Shinichi Nakajima Sumio Watanabe

In unidentifiable models, Bayes estimation has an advantage in generalization performance over maximum likelihood estimation. However, accurate approximation of the posterior distribution requires huge computational cost. In this paper, we consider an alternative approximation method, which we call a subspace Bayes approach. A subspace Bayes approach is an empirical Bayes approach whe...

2012
Liping Jing Michael K. Ng Tieyong Zeng

Microarray data profile gene expression on a whole-genome scale and therefore provide a good way to study associations between gene expression and the occurrence or progression of cancer. More and more researchers have realized that microarray data can help predict cancer samples. However, the number of gene-expression features is much larger than the sample size, which makes this task very diffi...
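
The shrinkage approach alluded to above is often implemented with an L1-penalized (lasso-type) classifier, which sets most gene coefficients exactly to zero. Below is a minimal sketch with scikit-learn, using synthetic data as a stand-in for a microarray set; the dimensions and penalty strength are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for microarray data: n samples, p >> n gene-expression features.
    rng = np.random.default_rng(0)
    n, p = 60, 2000
    X = rng.standard_normal((n, p))
    y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(int)

    # The L1 penalty shrinks most coefficients to zero, selecting a small gene subset.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    selected = np.flatnonzero(clf.coef_[0])
    print(f"{selected.size} genes kept out of {p}")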

Journal: :Neural networks : the official journal of the International Neural Network Society 2010
Junbin Gao Paul Wing Hing Kwan Daming Shi

Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In Internationa...
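
One generic reading of a kernelized LASSO, not necessarily the exact formulation of the cited papers, expands the regression function over kernels centred at the training points and puts an L1 penalty on the expansion coefficients, so that only a few kernel centres survive. A small sketch with scikit-learn; the kernel, bandwidth, and penalty level are illustrative.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics.pairwise import rbf_kernel

    # Toy one-dimensional regression problem.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

    # f(x) = sum_i alpha_i k(x, x_i) with an L1 penalty on alpha: a lasso fit on the kernel matrix.
    K = rbf_kernel(X, X, gamma=1.0)
    model = Lasso(alpha=0.01, max_iter=10000)
    model.fit(K, y)
    print("active kernel centres:", np.count_nonzero(model.coef_))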

2012
Enrique Pinzón

This paper proposes a new two-stage least squares (2SLS) estimator which is consistent and asymptotically normal in the presence of many weak and irrelevant instruments and heteroskedasticity. In the first stage the estimator uses an adaptive least absolute shrinkage and selection operator (LASSO) that selects the relevant instruments with high probability. However, the adaptive LASSO estimates have ...
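
A rough sketch of the two-stage idea, assuming a single endogenous regressor and using a ridge pilot fit to build adaptive-lasso weights; the weighting scheme, tuning, and data-generating process below are illustrative and not the paper's exact procedure.

    import numpy as np
    from sklearn.linear_model import LassoCV, Ridge

    rng = np.random.default_rng(2)
    n, n_inst = 500, 30
    Z = rng.standard_normal((n, n_inst))      # many instruments, most irrelevant
    pi = np.zeros(n_inst)
    pi[:3] = 1.0                              # only the first three instruments matter
    u = rng.standard_normal(n)
    x = Z @ pi + u                            # endogenous regressor (shares the error u)
    y = 1.5 * x + u + rng.standard_normal(n)

    # First stage: adaptive lasso via column rescaling with weights from a pilot ridge fit.
    pilot = Ridge(alpha=1.0).fit(Z, x)
    w = 1.0 / (np.abs(pilot.coef_) + 1e-6)
    lasso = LassoCV(cv=5).fit(Z / w, x)
    keep = np.flatnonzero(lasso.coef_)        # instruments selected by the lasso

    # Second stage: ordinary 2SLS using only the selected instruments.
    Zs = Z[:, keep]
    x_hat = Zs @ np.linalg.lstsq(Zs, x, rcond=None)[0]
    beta_2sls = np.linalg.lstsq(x_hat.reshape(-1, 1), y, rcond=None)[0]
    print("selected instruments:", keep, "2SLS estimate:", beta_2sls[0])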

2009
Wook Yeon Hwang Hao Helen Zhang Subhashis Ghosal

We propose a new class of variable selection techniques for regression in high dimensional linear models based on a forward selection version of the LASSO, adaptive LASSO or elastic net, respectively called forward iterative regression and shrinkage technique (FIRST), adaptive FIRST, and elastic FIRST. These methods seem to work effectively for extremely sparse high dimensional linear m...

2014
Rahim Alhamzawi

Lasso methods are regularization and shrinkage methods widely used for subset selection and estimation in regression problems. From a Bayesian perspective, the Lasso-type estimate can be viewed as a Bayesian posterior mode when specifying independent Laplace prior distributions for the coefficients of independent variables (Park and Casella, 2008). A scale mixture of normal priors can also prov...
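
The connection mentioned above can be stated compactly. With a Gaussian likelihood y | β ~ N(Xβ, σ²I) and independent Laplace priors π(β_j) ∝ exp(−λ|β_j|), the posterior mode solves a lasso problem; this is the standard identity, written here without the exact prior parametrization of Park and Casella (2008), which conditions the Laplace scale on σ².

    \hat{\beta}_{\mathrm{mode}}
      = \arg\max_{\beta} \Big\{ -\tfrac{1}{2\sigma^{2}} \lVert y - X\beta \rVert_{2}^{2} - \lambda \sum_{j} |\beta_{j}| \Big\}
      = \arg\min_{\beta} \Big\{ \lVert y - X\beta \rVert_{2}^{2} + 2\sigma^{2}\lambda \lVert \beta \rVert_{1} \Big\}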

[Chart: number of search results per year]