Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit

Authors

  • Laurent Demanet
  • Xiangxiong Zhang
Abstract

We provide a simple analysis of the Douglas-Rachford splitting algorithm in the context of ℓ1 minimization with linear constraints, and quantify the asymptotic linear convergence rate in terms of principal angles between relevant vector spaces. In the compressed sensing setting, we show how to bound this rate in terms of the restricted isometry constant. More general iterative schemes obtained by ℓ2-regularization and over-relaxation, including the dual split Bregman method [24], are also treated. We make no attempt at characterizing the transient regime preceding the onset of linear convergence.

Acknowledgments: The authors are grateful to Jalal Fadili, Stanley Osher, Gabriel Peyré, Ming Yan, Yi Yang and Wotao Yin for discussions on modern methods of optimization that were very instructive to us. The authors are supported by the National Science Foundation and the Alfred P. Sloan Foundation.
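For concreteness, the Douglas-Rachford iteration analyzed in the abstract can be sketched as follows for the basis pursuit problem min ||x||_1 subject to Ax = b, splitting the objective into the ℓ1 norm (whose proximal map is soft-thresholding) and the indicator of the affine constraint set (whose proximal map is a projection). This is a minimal illustrative sketch, not the authors' code; the step size `gamma` and iteration count are arbitrary choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_affine(v, A, b, AAt_inv):
    # Euclidean projection onto the affine set {x : A x = b},
    # assuming A has full row rank so A A^T is invertible.
    return v + A.T @ (AAt_inv @ (b - A @ v))

def douglas_rachford_bp(A, b, gamma=1.0, iters=2000):
    """Douglas-Rachford splitting for min ||x||_1 s.t. Ax = b."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)  # factor once, reuse every iteration
    y = np.zeros(n)
    for _ in range(iters):
        x = soft_threshold(y, gamma)                          # prox of gamma*||.||_1
        y = y + project_affine(2 * x - y, A, b, AAt_inv) - x  # DR update on y
    return soft_threshold(y, gamma)                           # x* = prox(y*)
```

On a well-conditioned random instance (e.g. a Gaussian matrix with a sufficiently sparse ground truth, the compressed-sensing regime the paper bounds via the restricted isometry constant), the iterates settle into the linear-convergence regime the paper describes after an initial transient.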


Related articles

A Cyclic Douglas-Rachford Iteration Scheme

In this paper we present two Douglas–Rachford inspired iteration schemes which can be applied directly to N-set convex feasibility problems in Hilbert space. Our main results are weak convergence of the methods to a point whose nearest point projections onto each of the N sets coincide. For affine subspaces, convergence is in norm. Initial results from numerical experiments, comparing our metho...


The Douglas-Rachford Algorithm for Weakly Convex Penalties

The Douglas-Rachford algorithm is widely used in sparse signal processing for minimizing a sum of two convex functions. In this paper, we consider the case where one of the functions is weakly convex but the other is strongly convex so that the sum is convex. We provide a condition that ensures the convergence of the same Douglas-Rachford iterations, provided that the strongly convex function i...


A note on the ergodic convergence of symmetric alternating proximal gradient method

We consider the alternating proximal gradient method (APGM) proposed to solve a convex minimization model with linear constraints and a separable objective function which is the sum of two functions without coupled variables. Inspired by the Peaceman-Rachford splitting method (PRSM), a natural idea is to extend APGM to the symmetric alternating proximal gradient method (SAPGM), which can be viewed as ...


On convergence rate of the Douglas-Rachford operator splitting method

This note provides a simple proof of an O(1/k) convergence rate for the Douglas-Rachford operator splitting method, where k denotes the iteration counter.


Lecture 20 : Splitting Algorithms

In this lecture, we discuss splitting algorithms for convex minimization problems whose objective is the sum of two nonsmooth functions. We start with the fixed point property of such problems and derive a general scheme of splitting algorithms based on fixed point iteration. This covers the Douglas-Rachford and Peaceman-Rachford splitting algorithms. We also discuss the convergence r...



Journal:
  • Math. Comput.

Volume 85, Issue -

Pages -

Publication date: 2016