Sparsity Based Regularization

Authors

  • Lorenzo Rosasco
  • Ioannis Gkioulekas
Abstract

In previous lectures, we saw how regularization can be used to restore the well-posedness of the empirical risk minimization (ERM) problem. We also derived algorithms that use regularization to impose smoothness assumptions on the solution space (as in Tikhonov regularization) or to introduce additional structure by confining the solution space to low-dimensional manifolds (manifold regularization). In this lecture, we examine the use of regularization for an alternative objective, namely sparsity. Over the last ten years, there has been increasing interest in the general field of sparsity. This interest comes not only from the Machine Learning community, but also from other scientific areas. For example, in Signal Processing sparsity is examined mainly in the context of compressive sensing [CRT06, Dono06] and the so-called basis pursuit [CDS96]. In the Statistics literature, basis pursuit is known as the lasso [Tibs96]. Strong connections also exist with sparse coding [OlFi97] and independent component analysis [HyOj00]. In these notes, we discuss sparsity from a regularization point of view, and refer to these connections only as they arise within this framework. First, we motivate the use of sparsity, with emphasis on the problem of variable selection. Then, we formulate the sparsity-based regularization problem, develop tractable approximations to it, and justify them using a geometric interpretation of sparsity. Finally, after discussing some of the properties of these approximations, we describe an algorithm for solving the sparsity-based regularization problem.
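The kind of algorithm alluded to at the end of the abstract can be sketched, for the lasso functional, as plain iterative soft-thresholding (ISTA). This is a minimal illustration under our own toy setup, not the notes' exact derivation; all variable names are ours:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # Iterative soft-thresholding for min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1.
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)       # gradient of the least-squares term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Toy problem: a 2-sparse vector recovered from 100 noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [1.5, -2.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = ista(X, y, lam=0.5)
```

Because the l1 term is handled through its proximal operator rather than a gradient, the iterates become exactly zero on the inactive coordinates, which is the variable-selection behavior the abstract refers to.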


Similar resources

3D Inversion of Magnetic Data through Wavelet based Regularization Method

This study deals with the 3D recovery of a magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. Tikhonov cost functions with two sparsity functions were used to recover the smooth parts as well as the sharp ...


Fast Iteratively Reweighted Least Squares Algorithms for Analysis-Based Sparsity Reconstruction

In this paper, we propose a novel algorithm for analysis-based sparsity reconstruction. It can solve the generalized problem by structured sparsity regularization with an orthogonal basis and total variation regularization. The proposed algorithm is based on the iterative reweighted least squares (IRLS) model, which is further accelerated by the preconditioned conjugate gradient method. The con...
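The reweighting idea behind IRLS can be sketched for a plain l1 penalty: each |w_j| is majorized by a quadratic, so every iteration reduces to a weighted ridge solve. This is a rough sketch under our own toy setup; the paper's generalized analysis formulation and preconditioned conjugate gradient acceleration are not reproduced here:

```python
import numpy as np

def irls_l1(X, y, lam, n_iter=50, eps=1e-6):
    # Reweighted least squares for min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1:
    # at each step, |w_j| is majorized by a quadratic with weight
    # lam / (|w_j| + eps), so each iteration solves a weighted ridge system.
    G = X.T @ X
    b = X.T @ y
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares warm start
    for _ in range(n_iter):
        weights = lam / (np.abs(w) + eps)      # large weight -> strong shrinkage
        w = np.linalg.solve(G + np.diag(weights), b)
    return w

# Toy problem: a 2-sparse vector recovered from noisy linear measurements.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[3, 11]] = [2.0, -1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = irls_l1(X, y, lam=0.5)
```

The small constant eps keeps the weights finite as coordinates approach zero; in practice it is often decreased across iterations.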


Improvement of image quality of time-domain diffuse optical tomography with lp sparsity regularization

An lp (0 < p ≤ 1) sparsity regularization is applied to time-domain diffuse optical tomography with a gradient-based nonlinear optimization scheme to improve the spatial resolution and the robustness to noise. The expression of the lp sparsity regularization is reformulated as a differentiable function of a parameter to avoid the difficulty in calculating its gradient in the optimization pr...
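The smoothing device described above can be illustrated with the common surrogate (w_j^2 + β)^(p/2) for |w_j|^p (our assumption; the paper's exact reformulation may differ), which is differentiable everywhere, including at zero:

```python
import numpy as np

def lp_penalty(w, p=0.5, beta=1e-4):
    # Differentiable surrogate for sum_j |w_j|^p: the smoothing term beta
    # removes the non-differentiability (and infinite slope for p < 1) at 0.
    return np.sum((w**2 + beta) ** (p / 2))

def lp_penalty_grad(w, p=0.5, beta=1e-4):
    # Gradient of the surrogate; well-defined everywhere, including w_j = 0.
    return p * w * (w**2 + beta) ** (p / 2 - 1)
```

With the gradient available in closed form, the surrogate can be dropped directly into any gradient-based nonlinear optimization scheme.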


Sparsity Regularization for Radon Measures

In this paper we establish a regularization method for Radon measures. Motivated from sparse L regularization we introduce a new regularization functional for the Radon norm, whose properties are then analyzed. We, furthermore, show well-posedness of Radon measure based sparsity regularization. Finally we present numerical examples along with the underlying algorithmic and implementation detail...


Combined Group and Exclusive Sparsity for Deep Neural Networks

The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at th...
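The two penalties combined in such approaches can be illustrated, assuming the standard (2,1) group norm and the squared (1,2) exclusive norm over the rows of a weight matrix (our reading of the truncated abstract; the paper's exact formulation may differ):

```python
import numpy as np

def group_sparsity(W):
    # (2,1)-norm: l2 within each row (group), l1 across rows. Drives entire
    # groups of weights to zero together (exploits positive correlations).
    return np.sum(np.sqrt(np.sum(W**2, axis=1)))

def exclusive_sparsity(W):
    # Squared (1,2)-norm: l1 within each row, then squared and summed.
    # Promotes competition among the weights inside each group.
    return 0.5 * np.sum(np.sum(np.abs(W), axis=1) ** 2)

# A 2x2 weight matrix: one dense group, one empty group.
W = np.array([[3.0, 4.0],
              [0.0, 0.0]])
```

A combined regularizer would add a weighted sum of the two terms to the training loss, trading off group-level pruning against within-group diversity.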



Publication date: 2008