Excessive Gap Technique in Nonsmooth Convex Minimization

Author

  • Yurii Nesterov
Abstract

In this paper we introduce a new primal-dual technique for the convergence analysis of gradient schemes for nonsmooth convex optimization. As an example of its application, we derive a primal-dual gradient method for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count. Another example is a gradient scheme that minimizes a nonsmooth strongly convex function with known structure with rate of convergence O(1/k^2). In both cases the efficiency of the methods is higher than the corresponding black-box lower complexity bounds by an order of magnitude.
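The device underlying this O(1/k) rate is smoothing: the structured nonsmooth term is replaced by a surrogate with Lipschitz gradient of constant O(1/mu), and an accelerated gradient method is run on the surrogate. The sketch below illustrates that idea only, on an assumed l1-type instance f(x) = ||Ax - b||_1; the problem choice, the coupling of mu to the target accuracy, and all names are illustrative assumptions. It is not the excessive gap update itself, which in addition maintains a dual sequence and decreases the smoothing parameters adaptively.

```python
import numpy as np

# Minimal sketch of the smoothing idea behind the O(1/k) result (not the
# excessive gap scheme itself, which also maintains a dual sequence).
# Assumed illustrative instance: min_x f(x),
#   f(x) = ||A x - b||_1 = max_{||u||_inf <= 1} <u, A x - b>.
# The mu-smoothed max is a sum of Huber terms whose gradient has Lipschitz
# constant ||A||_2^2 / mu; accelerated gradient descent on it reaches
# accuracy eps in O(1/eps) iterations once mu is tied to eps.

def huber_grad(r, mu):
    """Gradient of the mu-smoothed absolute value, applied elementwise."""
    return np.clip(r / mu, -1.0, 1.0)

def smoothed_gradient_method(A, b, eps=1e-3, iters=1000):
    mu = 2.0 * eps / b.size                 # smoothing level tied to target accuracy
    L = np.linalg.norm(A, 2) ** 2 / mu      # Lipschitz constant of the smoothed gradient
    x = y = np.zeros(A.shape[1])
    for k in range(iters):
        g = A.T @ huber_grad(A @ y - b, mu)
        x_next = y - g / L                  # gradient step on the smooth surrogate
        y = x_next + k / (k + 3.0) * (x_next - x)  # Nesterov momentum
        x = x_next
    return x
```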


Similar articles

Constrained convex minimization via model-based excessive gap

We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented...

Convergence analysis of the Peaceman-Rachford splitting method for nonsmooth convex optimization

In this paper, we focus on the convergence analysis of the Peaceman-Rachford splitting method applied to a convex minimization model whose objective function is the sum of a smooth and a nonsmooth convex function. The sublinear convergence rate, in terms of the worst-case O(1/t) iteration complexity, is established if the gradient of the smooth objective function is assumed to be Lipschitz...
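For reference, the Peaceman-Rachford iteration for min_x f(x) + g(x) composes the two reflected proximal operators, z <- R_g(R_f(z)) with R_h = 2 prox_h - I. The sketch below is a hedged illustration with assumed choices f(x) = 0.5||x - a||^2 (smooth) and g = lam||.||_1 (nonsmooth); it is not the test problem or the exact scheme analyzed in the paper.

```python
import numpy as np

# Peaceman-Rachford splitting for min_x f(x) + g(x): z <- R_g(R_f(z)),
# where R_h = 2 prox_h - I. Assumed illustrative terms:
#   f(x) = 0.5 * ||x - a||^2  (smooth),   g(x) = lam * ||x||_1  (nonsmooth).

def prox_f(z, gamma, a):
    return (z + gamma * a) / (1.0 + gamma)   # prox of 0.5*||x - a||^2

def prox_g(z, gamma, lam):
    # soft-thresholding: prox of lam*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

def peaceman_rachford(a, lam=0.1, gamma=1.0, iters=200):
    z = np.zeros_like(a)
    for _ in range(iters):
        x = prox_f(z, gamma, a)              # backward step on f
        y = prox_g(2.0 * x - z, gamma, lam)  # backward step on g at the reflection
        z = z + 2.0 * (y - x)                # full reflection: z <- R_g(R_f(z))
    return prox_f(z, gamma, a)
```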

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...

Canonical Dual Transformation Method and Generalized Triality Theory in Nonsmooth Global Optimization

This paper presents, within a unified framework, a potentially powerful canonical dual transformation method and an associated generalized duality theory in nonsmooth global optimization. It is shown that by the use of this method, many nonsmooth/nonconvex constrained primal problems in R^n can be reformulated into certain smooth/convex unconstrained dual problems in R^m with m ≤ n and without duality...

Smoothing technique for nonsmooth composite minimization with linear operator

We introduce and analyze an algorithm for the minimization of convex functions that are the sum of differentiable terms and proximable terms composed with linear operators. The method builds upon the recently developed smoothed gap technique. In addition to a precise convergence rate result, valid even in the presence of linear inclusion constraints, this new method allows an explicit treatment...
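One standard way to smooth a composed term g(Lx) with proximable g is the Moreau envelope, whose gradient is (y - prox_{mu*g}(y)) / mu. The sketch below runs plain gradient descent on an assumed instance with f(x) = 0.5||x - c||^2 and g = ||.||_1; the instance, names, and parameters are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

# Smoothing for min_x f(x) + g(L x) with proximable g: replace g by its
# Moreau envelope g_mu, with grad g_mu(y) = (y - prox_{mu*g}(y)) / mu, and
# run a gradient method on the smooth surrogate. Assumed instance:
#   f(x) = 0.5 * ||x - c||^2,   g = ||.||_1,   L a matrix.

def prox_l1(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def smoothed_composite(Lop, c, mu=1e-2, iters=500):
    # step size 1 / (L_f + ||L||^2 / mu), the surrogate's Lipschitz constant
    step = 1.0 / (1.0 + np.linalg.norm(Lop, 2) ** 2 / mu)
    x = np.zeros(Lop.shape[1])
    for _ in range(iters):
        y = Lop @ x
        g_env = (y - prox_l1(y, mu)) / mu    # gradient of the Moreau envelope
        x = x - step * ((x - c) + Lop.T @ g_env)
    return x
```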


Journal:
  • SIAM Journal on Optimization

Volume 16, Issue —

Pages —

Publication date: 2005