Half-Quadratic Minimization of Regularized Objectives as the Fixed Point Iteration for the Linearized Gradient

Authors

  • Mila Nikolova
  • Raymond Chan
Abstract

We focus on the minimization of regularized objective functions using the popular half-quadratic approach introduced by Geman and Reynolds in 1992. We show that, whenever applicable, this approach is equivalent to the very classical gradient linearization approach, also known as the fixed point iteration.


Similar articles

Conjugate gradient acceleration of iteratively re-weighted least squares methods

Iteratively Re-weighted Least Squares (IRLS) is a method for solving minimization problems involving non-quadratic cost functions, perhaps non-convex and non-smooth, which however can be described as the infimum over a family of quadratic functions. This transformation suggests an algorithmic scheme that solves a sequence of quadratic problems to be tackled efficiently by tools of numerical lin...
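A minimal IRLS instance of the idea described above, for least-absolute-deviations regression min_x ‖Ax − b‖₁. The test problem, the tolerance `eps`, and the function name are illustrative assumptions, and no conjugate-gradient acceleration is attempted here:

```python
import numpy as np

def irls_l1(A, b, iters=30, eps=1e-8):
    """Approximate min_x ||A x - b||_1 by solving a sequence of
    weighted least-squares subproblems (plain IRLS, no acceleration)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares start
    for _ in range(iters):
        r = A @ x - b
        # |r| = min over quadratics (w*r^2/2 + 1/(2w)); the minimizing
        # weight is w = 1/|r|, clipped away from division by zero.
        w = 1.0 / np.maximum(np.abs(r), eps)
        # Weighted normal equations: (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    return x
```

Because each subproblem is a weighted normal-equations solve, it is exactly the kind of inner quadratic problem that conjugate-gradient methods, as in the paper, can accelerate.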


Block-triangular Preconditioners for Systems Arising from Edge-preserving Image Restoration

Signal and image restoration problems are often solved by minimizing a cost function consisting of an ℓ2 data-fidelity term and a regularization term. We consider a class of convex and edge-preserving regularization functions. Specifically, half-quadratic regularization as a fixed-point iteration method is usually employed to solve this problem. The main aim of this paper is to solve the above-d...


A Comparison of Multilevel Methods for Total Variation

We consider numerical methods for solving problems involving total variation (TV) regularization for semidefinite quadratic minimization problems min_u ‖Ku − z‖² arising from ill-posed inverse problems. Here K is a compact linear operator, and z is data containing inexact or partial information about the “true” u. TV regularization entails adding to the objective function a penalty term which is a s...
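The TV-regularized objective referred to above can be written, in one standard continuous form that is an assumption here rather than a quote from the paper, as:

```latex
\min_{u}\; \|Ku - z\|_2^2 \;+\; \beta \int_{\Omega} |\nabla u| \, dx ,
```

where β > 0 weights the total-variation penalty against the data-fidelity term over the image domain Ω.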



A first-order symmetric updating method for solving large-scale optimization problems

The search for a local minimizer in unconstrained optimization problems and for a fixed point of the gradient system of ordinary differential equations are two closely related problems. Limited-memory algorithms are widely used to solve large-scale problems, while Runge–Kutta methods are used to solve differential equations numerically. In this paper, using the concept of the subspace method and...




Journal:

Volume   Issue 

Pages  -

Publication year: 2006