"Efficient" Subgradient Methods for General Convex Optimization
Author
Abstract
A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly-feasible point is known. A feasible sequence of iterates is generated, which converges to within user-specified error of optimality. Feasibility is maintained with a linesearch at each iteration, avoiding the need for orthogonal projections onto the feasible region (an operation that limits practicality of traditional subgradient methods). Lipschitz continuity is not required, yet the algorithm is shown to possess a convergence rate analogous to rates for traditional methods, albeit with error measured relatively, whereas traditionally error has been absolute. The algorithm is derived using an elementary framework that can be utilized to design other such algorithms.
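The abstract leaves the algorithm's details to the paper itself; the following is a rough Python sketch of the idea it describes, a subgradient step followed by a line search back toward a known strictly feasible point in place of an orthogonal projection. The membership oracle `feasible`, the anchor `e`, and the toy objective are invented for illustration and are not the paper's actual construction.

```python
import numpy as np

def feasible_subgradient_step(x, g, e, feasible, step):
    """One illustrative iteration: move along the negative subgradient,
    then line-search back toward the strictly feasible anchor e until
    feasibility is restored. A sketch of the idea in the abstract,
    not the paper's actual algorithm."""
    y = x - step * g / max(np.linalg.norm(g), 1e-12)
    t = 1.0
    # Points close to e are feasible because e is strictly feasible,
    # so halving t eventually yields a feasible trial point.
    while not feasible(e + t * (y - e)) and t > 1e-10:
        t *= 0.5
    return e + t * (y - e)

# Toy example: minimize ||x - c||_1 over the unit ball; e = 0 is strictly feasible.
c = np.array([2.0, 0.5])
feasible = lambda x: np.linalg.norm(x) <= 1.0
x = np.zeros(2)
for k in range(1, 2001):
    x = feasible_subgradient_step(x, np.sign(x - c), np.zeros(2), feasible,
                                  1.0 / np.sqrt(k))
print(x)  # ends up on the unit sphere, pulled toward c
```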
Related Resources
Ergodic Results in Subgradient Optimization
Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a main contribution to their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions, since the subgradients used i...
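A minimal sketch of the ergodic idea, assuming a toy unconstrained objective: running averages of the iterates and of the subgradients are smoother than the raw oscillating sequences, and a small averaged subgradient signals approximate fulfillment of optimality conditions.

```python
import numpy as np

# Sketch: ergodic (running-average) sequences in a plain subgradient method.
# The raw iterates for the nonsmooth f(x) = ||x||_1 oscillate, while the
# averaged iterate and averaged subgradient behave more smoothly.
x = np.array([3.0, -2.0])
x_bar, g_bar = x.copy(), np.zeros(2)
for k in range(1, 2001):
    g = np.sign(x)                       # a subgradient of ||x||_1
    x = x - (1.0 / np.sqrt(k)) * g       # diminishing step size
    x_bar += (x - x_bar) / (k + 1)       # running average of iterates
    g_bar += (g - g_bar) / k             # running average of subgradients
print(np.abs(x).sum(), np.abs(x_bar).sum(), np.linalg.norm(g_bar))
```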
Lecture 2: Subgradient Methods
In this lecture, we discuss first-order methods for the minimization of convex functions. We focus almost exclusively on subgradient-based methods, which are essentially universally applicable to convex optimization problems, because they rely very little on the structure of the problem being solved. This leads to effective but slow algorithms in classical optimization problems; however, in la...
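For contrast with the main paper's projection-free approach, here is a minimal sketch of the classical projected subgradient method the lecture covers, with an illustrative box constraint where the projection is just a clip; the objective and step-size rule are assumptions for the example.

```python
import numpy as np

# Classical projected subgradient method on a box, where projection is a clip.
# Tracking the best value seen is standard, since f can rise between iterates.
lo, hi = np.zeros(2), np.ones(2)
c = np.array([2.0, -1.0])
f = lambda x: np.abs(x - c).sum()        # nonsmooth objective ||x - c||_1
x, best = np.array([0.5, 0.5]), np.inf
for k in range(1, 2001):
    best = min(best, f(x))
    x = np.clip(x - (1.0 / np.sqrt(k)) * np.sign(x - c), lo, hi)
print(best)  # approaches the optimal value 2.0, attained at x = (1, 0)
```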
Multiple Cuts in Separating Plane Algorithms
This paper presents an extended version of the separating-plane algorithms for subgradient-based finite-dimensional nondifferentiable convex black-box optimization. The extension introduces additional cuts for the epigraph of the conjugate of the objective function, which improve the convergence of the algorithm. The case of affine cuts is considered in more detail, and it is shown that it requires soluti...
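A plain Kelley-type cutting-plane loop conveys the mechanism being extended: each subgradient yields an affine cut supporting the epigraph of the objective, and the next trial point minimizes the piecewise-linear model over a box. The toy objective, the box, and the use of scipy's LP solver are assumptions for the example; the paper's additional cuts on the epigraph of the conjugate are a refinement not shown here.

```python
import numpy as np
from scipy.optimize import linprog

# Kelley-style cutting planes: f(xi) + gi @ (x - xi) <= t collects affine
# cuts under the epigraph; the LP "min t" over the cuts gives the next point.
c0 = np.array([0.7, -0.2])
f  = lambda x: np.abs(x - c0).sum()      # toy nonsmooth objective
df = lambda x: np.sign(x - c0)           # a subgradient of f
x, cuts = np.zeros(2), []
for _ in range(25):
    g = df(x)
    cuts.append((g, g @ x - f(x)))       # gi @ x - t <= gi @ xi - f(xi)
    A = [np.append(gi, -1.0) for gi, bi in cuts]
    b = [bi for gi, bi in cuts]
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A, b_ub=b,
                  bounds=[(-1, 1), (-1, 1), (None, None)])
    x = res.x[:2]                        # minimizer of the current model
print(x, f(x))  # x approaches the true minimizer (0.7, -0.2)
```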
On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
This paper considers a stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of the weights used to construct the averages. Through the use o...
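One common instance of the idea, sketched under assumed choices (entropy mirror map on the probability simplex, step sizes 1/sqrt(k), and weights proportional to the step sizes, none of which are necessarily the paper's choices):

```python
import numpy as np

# Stochastic mirror descent on the simplex with the entropy mirror map
# (multiplicative-weights update) and a weighted average of the iterates,
# with weights proportional to the step sizes.
rng = np.random.default_rng(0)
c = np.array([0.3, 0.1, 0.6])             # f(x) = c @ x, minimized at the
x = np.ones(3) / 3                        # vertex with the smallest cost
x_sum, w_sum = np.zeros(3), 0.0
for k in range(1, 5001):
    g = c + 0.1 * rng.standard_normal(3)  # noisy subgradient of f
    step = 1.0 / np.sqrt(k)
    x = x * np.exp(-step * g)             # exponentiated-gradient step
    x /= x.sum()                          # stays on the probability simplex
    x_sum += step * x                     # step-size-weighted sum
    w_sum += step
print(x_sum / w_sum)  # most mass on coordinate 1, the smallest cost
```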
Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
We present a unifying framework for nonsmooth convex minimization bringing together ε-subgradient algorithms and methods for the convex feasibility problem. This development is a natural step for ε-subgradient methods in the direction of constrained optimization, since the Euclidean projection frequently required in such methods is replaced by an approximate projection, which is often easier to co...
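A hedged sketch of the approximate-projection idea, with halfspace data and objective invented for the example: the exact Euclidean projection is replaced by a few cyclic closed-form halfspace projections, in the spirit of convex feasibility methods.

```python
import numpy as np

# A subgradient method where the exact projection is replaced by an
# approximate one: a few cyclic sweeps of closed-form halfspace projections
# onto the intersection {x : a @ x <= b for each (a, b)}.
halfspaces = [(np.array([1.0, 1.0]), 1.0),
              (np.array([-1.0, 2.0]), 2.0)]

def approx_project(x, sweeps=2):
    for _ in range(sweeps):
        for a, b in halfspaces:
            viol = a @ x - b
            if viol > 0.0:
                x = x - (viol / (a @ a)) * a  # exact step onto one halfspace
    return x

c = np.array([2.0, 2.0])
x = np.zeros(2)
for k in range(1, 2001):
    x = approx_project(x - (1.0 / np.sqrt(k)) * np.sign(x - c))
print(x)  # settles near the boundary of the intersection, pulled toward c
```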
Journal: SIAM Journal on Optimization
Volume: 26, Issue: -
Pages: -
Publication year: 2016