Asymptotic Theory for Regularization: One-Dimensional Linear Case

Author

  • Petri Koistinen
Abstract

The generalization ability of a neural network can sometimes be improved dramatically by regularization. To analyze the improvement, one needs more refined results than the asymptotic distribution of the weight vector. Here we study the simple case of one-dimensional linear regression under quadratic regularization, i.e., ridge regression. We treat the random-design, misspecified case, deriving expansions for the optimal regularization parameter and the ensuing improvement. It is possible to construct examples where it is best to use no regularization.
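The effect described in the abstract can be illustrated with a minimal Monte Carlo sketch (not the paper's derivation; this toy setup is well-specified rather than misspecified, and all parameter values below are illustrative assumptions). The one-dimensional ridge estimate has the closed form b̂(λ) = Σxᵢyᵢ / (Σxᵢ² + λ), and the experiment compares its average squared estimation error across a few values of λ under a random design:

```python
import random

random.seed(0)

def ridge_1d(xs, ys, lam):
    # One-dimensional ridge estimate: argmin_b sum_i (y_i - b*x_i)^2 + lam*b^2,
    # which has the closed form (sum x_i y_i) / (sum x_i^2 + lam).
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def mean_sq_error(lam, n=20, trials=2000, beta=1.0, noise=1.0):
    # Monte Carlo estimate of E[(bhat - beta)^2] under a random Gaussian design.
    err = 0.0
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        ys = [beta * x + random.gauss(0.0, noise) for x in xs]
        b = ridge_1d(xs, ys, lam)
        err += (b - beta) ** 2
    return err / trials

# With noisy data and small n, a small positive lambda can lower the
# estimation error relative to ordinary least squares (lambda = 0),
# though, as the abstract notes, this is not guaranteed in every setting.
for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, mean_sq_error(lam))
```

Because the bias introduced by shrinkage trades off against reduced variance, the minimizing λ depends on the sample size, the noise level, and the true coefficient; the paper's expansions characterize this trade-off asymptotically.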


Related articles

Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space

High-dimensional data analysis has motivated a spectrum of regularization methods for variable selection and sparse modeling, with two popular methods being convex and concave ones. A long debate has taken place on whether one class dominates the other, an important question both in theory and to practitioners. In this article, we characterize the asymptotic equivalence of regularization method...

Liquid-Gas Coexistence Equilibrium in a Relaxation Model

Abstract We study stability of liquid-gas coexistence equilibrium in a relaxation model for isothermal phase transition in a sealed one-dimensional tube. With matched asymptotic expansion, we derive formally a linear system for first order perturbations. By solving this system analytically, it is shown that small initial perturbations are damped out in general; yet they may maintain at certain ...

Asymptotic Expansions for Regularization Methods of Linear Fully Implicit Differential-Algebraic Equations

Abstract. Differential-algebraic equations with a higher index can be approximated by regularization algorithms. One such possibility was introduced by März for linear time-varying index-2 systems. In the present paper her approach is generalized to linear time-varying index-3 systems. The structure of the regularized solutions and their convergence properties are characterized in terms of...

On High Dimensional Post-Regularization Prediction Intervals

This paper considers the construction of prediction intervals for future observations in high dimensional regression models. We propose a new approach to evaluate the uncertainty for estimating the mean parameter based on the widely-used penalization/regularization methods. The proposed method is then applied to construct prediction intervals for sparse linear models as well as sparse additive ...

Covariance Estimation: The GLM and Regularization Perspectives

Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in ...


Journal title:

Publication date: 1997