Sufficient Conditions for Uniform Stability of Regularization Algorithms

Authors

  • Andre Wibisono
  • Lorenzo Rosasco
  • Tomaso Poggio
Abstract

In this paper, we study the stability and generalization properties of penalized empirical-risk minimization algorithms. We propose a set of properties of the penalty term that is sufficient to ensure uniform β-stability: we show that if the penalty function satisfies a suitable convexity property, then the induced regularization algorithm is uniformly β-stable. In particular, our results imply that regularization algorithms whose penalty functions are strongly convex on bounded domains are β-stable. In view of the results in [3], uniform stability implies generalization, and moreover, consistency results can be easily obtained. We apply our results to show that ℓp regularization for 1 < p ≤ 2 and elastic-net regularization are uniformly β-stable, and therefore generalize.
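As a rough illustration (not from the paper), uniform stability can be probed empirically: fit a strongly convex penalized least-squares model (ridge regression, the ℓp case with p = 2) on two training sets that differ in a single example, and measure how far the two predictors move apart on held-out points. All names and the synthetic data below are made up for this sketch; the stability theory predicts the gap shrinks as the regularization strength λ or the sample size n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form minimizer of (1/n)||Xw - y||^2 + lam*||w||^2."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

lam = 1.0
w = ridge(X, y, lam)

# Replace one training point (the "perturbed" dataset S^i) and refit.
X2, y2 = X.copy(), y.copy()
X2[0] = rng.normal(size=d)
y2[0] = X2[0] @ w_true + 0.1 * rng.normal()
w2 = ridge(X2, y2, lam)

# Empirical proxy for the stability constant: the largest prediction
# change over fresh test points caused by swapping one training example.
X_test = rng.normal(size=(1000, d))
beta_hat = np.max(np.abs(X_test @ (w - w2)))
print(f"empirical stability gap: {beta_hat:.4f}")
```

With a strongly convex penalty the theory bounds this gap by O(1/(λn)), so rerunning the sketch with a larger λ or more samples should shrink `beta_hat`.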


Similar articles

Permanence and Uniformly Asymptotic Stability of Almost Periodic Positive Solutions for a Dynamic Commensalism Model on Time Scales

In this paper, we study a dynamic commensalism model with nonmonotone functional response and density-dependent birth rates on time scales, and derive sufficient conditions for permanence. We also establish the existence and uniform asymptotic stability of a unique almost periodic positive solution of the model using the Lyapunov functional method.


Generalization Bounds of Regularization Algorithms Derived Simultaneously through Hypothesis Space Complexity, Algorithmic Stability and Data Quality

A main issue in machine learning research is to analyze the generalization performance of a learning machine. Most classical results on the generalization performance of regularization algorithms are derived solely from the complexity of the hypothesis space or the stability property of the learning algorithm. However, in practical applications, the performance of a learning algorithm is not actually...


Stability Conditions for Online Learnability

Stability is a general notion that quantifies the sensitivity of a learning algorithm's output to small changes in the training dataset (e.g. deletion or replacement of a single training sample). Such conditions have recently been shown to be more powerful for characterizing learnability in the general learning setting under i.i.d. samples, where uniform convergence is not necessary for learnability...


PSO-Optimized Blind Image Deconvolution for Improved Detectability in Poor Visual Conditions

Abstract: Image restoration is a critical step in many vision applications. Due to the poor quality of Passive Millimeter Wave (PMMW) images, especially in marine and underwater environments, developing strong algorithms for the restoration of these images is of primary importance. In addition, little information is available about the image degradation process, which is referred to as the Point Spread Function (PSF...


Stability and Generalization of Bipartite Ranking Algorithms

The problem of ranking, in which the goal is to learn a real-valued ranking function that induces a ranking or ordering over an instance space, has recently gained attention in machine learning. We study generalization properties of ranking algorithms, in a particular setting of the ranking problem known as the bipartite ranking problem, using the notion of algorithmic stability. In particular,...




Journal:

Volume   Issue

Pages  -

Publication date: 2009