Weak convergence of the regularization path in penalized M-estimation

Author

  • JEAN-FRANÇOIS GERMAIN

Abstract

We consider an estimator β̂n(t) defined, for each t, as the element φ ∈ Φ minimizing a contrast process Λn(φ, t). We give some general results for deriving the weak convergence of √n(β̂n − β) in the space of bounded functions, where, for each t, β(t) is the φ ∈ Φ minimizing the limit of Λn(φ, t) as n → ∞. These results are applied in the context of penalized M-estimation, that is, when Λn(φ, t) = Mn(φ) + tJn(φ), where Mn is a usual contrast process and Jn a penalty such as the ℓ1 norm or the squared ℓ2 norm. The function β̂n is then called a regularization path. For instance, we show that the central limit theorem established for the lasso estimator in Knight and Fu [2000] continues to hold in a functional sense for the regularization path. Other examples include various possible contrast processes for Mn, such as those considered in Pollard [1985].
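As a minimal numerical sketch of the objects involved (not taken from the paper), assume a least-squares contrast Mn(φ) = (1/n) Σi (yi − xiᵀφ)² and the squared ℓ2 penalty Jn(φ) = ‖φ‖². The minimizer β̂n(t) of Λn(φ, t) = Mn(φ) + tJn(φ) then has a closed form, so the path t ↦ β̂n(t) and the scaled process √n(β̂n(t) − β(t)) can be evaluated on a grid; the design, sample size, and grid below are arbitrary illustration choices.

```python
# Illustrative sketch (not from the paper): the regularization path t -> beta_hat_n(t)
# for M_n(phi) = (1/n)||y - X phi||^2 and J_n(phi) = ||phi||_2^2, for which the
# minimizer of Lambda_n(phi, t) = M_n(phi) + t * J_n(phi) is available in closed form.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
beta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, p))                  # standard normal design (illustrative)
y = X @ beta_true + rng.normal(size=n)

def ridge_path(X, y, t_grid):
    """Return beta_hat_n(t) for each t in t_grid (one row per t)."""
    n, p = X.shape
    G = X.T @ X / n                          # (1/n) X'X
    b = X.T @ y / n                          # (1/n) X'y
    return np.stack([np.linalg.solve(G + t * np.eye(p), b) for t in t_grid])

t_grid = np.linspace(0.0, 2.0, 50)
path = ridge_path(X, y, t_grid)              # shape (len(t_grid), p)

# With E[xx'] = I the limit contrast is minimized at beta(t) = beta_true / (1 + t),
# so the process studied in the paper can be inspected on the grid as
process = np.sqrt(n) * (path - beta_true / (1.0 + t_grid)[:, None])
```

For the ℓ1 penalty the minimizer has no closed form, but the same construction applies with any numerical lasso solver in place of the linear solve.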


Related articles

Optimal Computational and Statistical Rates of Convergence for Sparse Nonconvex Learning Problems.

We provide theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization and sparse elliptical random design...
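For concreteness, one widely used nonconvex penalty in this line of work is the SCAD penalty of Fan and Li (2001); the sketch below writes out its pointwise form and is purely illustrative, not code from the cited paper.

```python
# Pointwise SCAD penalty (Fan and Li, 2001), a standard example of a nonconvex
# regularizer; illustrative only.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty value; a > 2, with a = 3.7 the usual default."""
    t = np.abs(np.asarray(theta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                                    # linear near zero
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),  # quadratic blend
            lam ** 2 * (a + 1) / 2,                                 # constant tail
        ),
    )
```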


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...
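A minimal sketch of the coordinate-wise update with soft-thresholding for the lasso objective (1/(2n))‖y − Xb‖² + λ‖b‖₁ is given below; it illustrates the update rule only and is not the implementation of Friedman et al.

```python
# Cyclic coordinate descent with soft-thresholding for
# (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.  Illustrative sketch only.
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n            # (1/n) ||X_j||^2 for each column
    r = y - X @ b                                 # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                   # partial residual excluding coordinate j
            rho = X[:, j] @ r / n                 # (1/n) X_j' (partial residual)
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]                   # restore residual with updated b_j
    return b
```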


A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)

Given n observations of a p-dimensional random vector, the covariance matrix and its inverse (precision matrix) are needed in a wide range of applications. The sample covariance (e.g., its eigenstructure) can misbehave when p is comparable to the sample size n. Regularization is often used to mitigate the problem. In this paper, we propose an ℓ1 penalized pseudo-likelihood estimate for the inverse ...
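A rough sketch of the nodewise ℓ1-regression idea behind pseudo-likelihood precision estimation is shown below (in the spirit of Meinshausen and Bühlmann); it is not the SPLICE path-following algorithm itself, and the scikit-learn Lasso call and penalty level are illustrative assumptions.

```python
# Nodewise l1-penalized regressions assembled into a crude sparse precision
# (inverse covariance) estimate; related in spirit to pseudo-likelihood methods
# such as SPLICE, but not the SPLICE algorithm.
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_precision(X, lam):
    n, p = X.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, j])
        resid_var = np.mean((X[:, j] - X[:, others] @ fit.coef_) ** 2)
        Theta[j, j] = 1.0 / resid_var              # diagonal: 1 / residual variance
        Theta[j, others] = -fit.coef_ / resid_var  # off-diagonal: -gamma_j / residual variance
    return 0.5 * (Theta + Theta.T)                 # simple symmetrization
```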


Generalized Linear Model Regression under Distance-to-set Penalties

Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and...
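The distance-penalty idea can be sketched in a few lines: penalize the squared Euclidean distance to a constraint set, here the nonnegative orthant, whose projection is componentwise max(·, 0). Plain gradient descent with a fixed step is used for simplicity; this is an illustration of the general idea, not the algorithm of the cited paper.

```python
# Least squares plus (rho/2) * dist(b, C)^2 with C the nonnegative orthant,
# minimized by plain gradient descent.  Illustrative sketch only.
import numpy as np

def project_nonneg(b):
    return np.maximum(b, 0.0)                     # Euclidean projection onto C

def distance_penalized_ls(X, y, rho, step=0.01, n_iter=2000):
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        grad_loss = -X.T @ (y - X @ b) / n        # gradient of (1/(2n))||y - Xb||^2
        grad_pen = rho * (b - project_nonneg(b))  # gradient of (rho/2) * dist(b, C)^2
        b -= step * (grad_loss + grad_pen)
    return b
```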


An Optimized Online Secondary Path Modeling Method for Single-Channel Feedback ANC Systems

This paper proposes a new method for online secondary path modeling in feedback active noise control (ANC) systems. In practical cases, the secondary path is usually time-varying, so online modeling of the secondary path is required to ensure convergence of the system. In the literature, secondary path estimation is usually performed offline, prior to online modeling, where in the prop...
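For orientation, the building block typically underlying secondary-path modeling is an LMS system-identification loop; the bare-bones sketch below is a generic illustration with arbitrary filter length and step size, not the method proposed in this paper.

```python
# Bare-bones LMS identification of an FIR path mapping input x to measured output d.
# Generic illustration only; not the proposed online secondary-path modeling method.
import numpy as np

def lms_identify(x, d, n_taps=16, mu=0.01):
    s_hat = np.zeros(n_taps)                      # current FIR estimate of the path
    buf = np.zeros(n_taps)                        # most recent input samples
    for xi, di in zip(x, d):
        buf = np.roll(buf, 1)
        buf[0] = xi
        e = di - s_hat @ buf                      # a priori estimation error
        s_hat += mu * e * buf                     # LMS weight update
    return s_hat
```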



Journal:

Volume   Issue

Pages  -

Publication date: 2009