Rate Minimaxity of the Lasso and Dantzig Estimators
Authors
Abstract
We consider the estimation of regression coefficients in a high-dimensional linear model. A lower bound of the minimax ℓq risk is provided for regression coefficients in ℓr balls, along with a minimax lower bound for the tail of the ℓq loss. Under certain conditions on the design matrix and penalty level, we prove that these minimax convergence rates are attained by both the Lasso and Dantzig estimators.
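For concreteness, the two estimators compared above can be written in their standard forms (these are the conventional definitions, not quoted from this paper): given a response $y \in \mathbb{R}^n$, a design matrix $X \in \mathbb{R}^{n \times p}$, and a penalty level $\lambda > 0$,

```latex
% Lasso: \ell_1-penalized least squares (Tibshirani, 1996)
\hat{\beta}^{\mathrm{Lasso}}
  = \arg\min_{\beta \in \mathbb{R}^p}
    \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1

% Dantzig selector (Candes and Tao, 2007): minimize the \ell_1 norm
% subject to a sup-norm bound on the residual's correlation with the design
\hat{\beta}^{\mathrm{Dantzig}}
  = \arg\min_{\beta \in \mathbb{R}^p} \lVert \beta \rVert_1
    \quad \text{subject to} \quad
    \lVert X^{\top}(y - X\beta) \rVert_\infty \le \lambda
```

Both estimators shrink toward sparse solutions through the ℓ1 norm; the conditions on the design matrix and on λ referred to in the abstract are what allow both to attain the same minimax rates over ℓr balls.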
Similar resources
Rate Minimaxity of the Lasso and Dantzig Selector for the ℓq Loss in ℓr Balls
We consider the estimation of regression coefficients in a high-dimensional linear model. For regression coefficients in ℓr balls, we provide lower bounds for the minimax ℓq risk and minimax quantiles of the ℓq loss for all design matrices. Under an ℓ0 sparsity condition on a target coefficient vector, we sharpen and unify existing oracle inequalities for the Lasso and Dantzig selector. We deri...
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Abstract: We derive the ℓ∞ convergence rate simultaneously for Lasso and Dantzig estimators in a high-dimensional linear regression model under a mutual coherence assumption on the Gram matrix of the design and two different assumptions on the noise: Gaussian noise and general noise with finite variance. Then we prove that simultaneously the thresholded Lasso and Dantzig estimators with a prope...
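The sign-concentration claim above rests on a simple mechanism: if an estimator is within t of the truth in sup norm, then hard-thresholding it at level t zeroes out every true zero and preserves the sign of every coefficient whose magnitude exceeds 2t. A minimal numpy sketch (the coefficient vector and the bound t are our own toy values, not taken from the paper):

```python
import numpy as np

# Toy illustration: beta_hat is ANY estimate satisfying
# ||beta_hat - beta||_inf <= t (here simulated by adding bounded noise).
rng = np.random.default_rng(0)
beta = np.array([3.0, -2.5, 0.0, 0.0, 1.5, 0.0])  # true coefficients
t = 0.4                                            # assumed sup-norm error bound
beta_hat = beta + rng.uniform(-t, t, beta.size)    # estimate within the bound

# Hard-threshold at level t: true zeros are killed (|beta_hat_j| <= t),
# coefficients with |beta_j| > 2t survive with their sign intact.
thresholded = np.where(np.abs(beta_hat) > t, beta_hat, 0.0)
print(np.sign(thresholded))  # matches np.sign(beta), since min nonzero |beta_j| > 2t
```

Since the smallest nonzero coefficient here (1.5) exceeds 2t = 0.8, the recovered sign pattern is guaranteed to be exact for any estimate within the sup-norm bound.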
Transductive versions of the LASSO and the Dantzig Selector
Transductive methods are useful in prediction problems when the training dataset is composed of a large number of unlabeled observations and a smaller number of labeled observations. In this paper, we propose an approach for developing transductive prediction procedures that are able to take advantage of sparsity in high-dimensional linear regression. More precisely, we define transduct...
Parallelism, uniqueness, and large-sample asymptotics for the Dantzig selector.
The Dantzig selector (Candès and Tao, 2007) is a popular ℓ1-regularization method for variable selection and estimation in linear regression. We present a very weak geometric condition on the observed predictors which is related to parallelism and, when satisfied, ensures the uniqueness of Dantzig selector estimators. The condition holds with probability 1, if the predictors are drawn from a co...
Differenced-Based Double Shrinking in Partial Linear Models
The partial linear model is very flexible, since the relation between the covariates and the responses can be either parametric or nonparametric. However, estimating the regression coefficients is challenging, since the nonparametric component must be estimated simultaneously. As a remedy, the differencing approach, which eliminates the nonparametric component so that the regression coefficients can ...
Publication date: 2009