Radial Subgradient Descent
Author
Abstract
We present a subgradient method for minimizing non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis is surprisingly simple. At each iteration, the algorithm takes a subgradient step and then performs a line search to move radially towards (or away from) the known feasible point. Our convergence results have striking similarities to those of traditional methods that require Lipschitz continuity. Costly orthogonal projections typical of subgradient methods are entirely avoided.
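To make the iteration concrete, here is a minimal numerical sketch, assuming only a subgradient oracle `subgrad`, a feasibility test `feasible`, and a known strictly feasible point `e` (all hypothetical names). The radial line search is approximated by a coarse grid over the ray from `e` through the trial point; this is a stand-in for illustration, not the exact line-search rule analyzed in the paper.

```python
import numpy as np

def radial_subgradient_descent(f, subgrad, feasible, e, x0, steps=200, step_size=1.0):
    """Sketch of a radial subgradient iteration: subgradient step, then a
    radial line search toward (or away from) the known strictly feasible point e."""
    x = np.asarray(x0, dtype=float)
    e = np.asarray(e, dtype=float)
    best = x.copy()
    for k in range(steps):
        g = subgrad(x)
        # plain subgradient step with a simple diminishing step size
        y = x - (step_size / np.sqrt(k + 1)) * g
        # radial line search: candidate points e + t*(y - e) along the ray from e,
        # t < 1 moves toward e, t > 1 moves away from it
        ts = np.linspace(0.1, 2.0, 40)          # coarse grid stand-in for a real line search
        candidates = [e + t * (y - e) for t in ts if feasible(e + t * (y - e))]
        if candidates:
            x = min(candidates, key=f)
        if f(x) < f(best):
            best = x.copy()
    return best
```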
Similar References
A new Levenberg-Marquardt approach based on Conjugate gradient structure for solving absolute value equations
In this paper, we present a new approach for solving the absolute value equation (AVE) which uses a Levenberg-Marquardt method with conjugate subgradient structure. In conjugate subgradient methods, the new direction is obtained by combining the steepest descent direction and the previous direction, which may not lead to good numerical results. Therefore, we replace the steepest descent dir...
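For orientation, the sketch below shows a generic damped Gauss-Newton (Levenberg-Marquardt) iteration for the AVE Ax − |x| = b, using the generalized Jacobian A − diag(sign(x)); it is a plain textbook variant, not the conjugate-structure modification proposed in the cited paper.

```python
import numpy as np

def lm_for_ave(A, b, x0, lam=1e-2, iters=50, tol=1e-10):
    """Generic Levenberg-Marquardt sketch for the AVE  Ax - |x| = b."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        F = A @ x - np.abs(x) - b               # residual of the AVE
        if np.linalg.norm(F) < tol:
            break
        J = A - np.diag(np.sign(x))             # generalized Jacobian of F
        # damped Gauss-Newton step: (J^T J + lam I) d = -J^T F
        d = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ F)
        x = x + d
    return x
```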
On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
This paper considers a stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
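A minimal sketch of the idea, assuming a noisy subgradient oracle `stoch_subgrad` and a projection `project` onto the constraint set (both hypothetical names). The Bregman distance is taken to be Euclidean, and the averaging weights are set proportional to the step sizes, which is one common weighting choice rather than necessarily the one analyzed in the cited paper.

```python
import numpy as np

def stochastic_mirror_descent_avg(stoch_subgrad, project, x0, steps, step=0.1):
    """Stochastic (Euclidean) mirror descent with weighted iterate averaging."""
    x = np.asarray(x0, dtype=float)
    weight_sum, weighted_sum = 0.0, np.zeros_like(x)
    for k in range(steps):
        g = stoch_subgrad(x)
        gamma = step / np.sqrt(k + 1)           # diminishing step size
        x = project(x - gamma * g)              # mirror step with Euclidean prox
        weighted_sum += gamma * x               # weighted running sum of iterates
        weight_sum += gamma
    return weighted_sum / weight_sum            # weighted average of the iterates
```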
Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
We extend the classic convergence rate theory for subgradient methods to apply to non-Lipschitz functions. For the deterministic projected subgradient method, we present a global O(1/√T) convergence rate for any convex function which is locally Lipschitz around its minimizers. This approach is based on Shor’s classic subgradient analysis and implies generalizations of the standard convergenc...
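For reference, this is the classical projected subgradient scheme whose O(1/√T) rate the cited paper extends beyond globally Lipschitz objectives; `project` denotes Euclidean projection onto the constraint set and is an assumed helper here.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, T, R=1.0):
    """Deterministic projected subgradient method with a 1/sqrt(t) step size."""
    x = np.asarray(x0, dtype=float)
    best = x.copy()
    for t in range(1, T + 1):
        g = subgrad(x)
        x = project(x - (R / np.sqrt(t)) * g)   # subgradient step, then project back
        if f(x) < f(best):
            best = x.copy()                     # track the best iterate seen
    return best
```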
Hedge Algorithm and Subgradient Methods
We show that the Hedge Algorithm, a method widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely a Mirror Descent Scheme. Using this reformulation, we can slightly improve the worst-case convergence guarantees of the Hedge Algorithm. Recently, Nesterov has introduced the class of Primal-Dual Subgradient Algo...
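The Hedge (exponential weights) update itself is short; the sketch below shows it on a loss matrix, and the multiplicative update is exactly an entropy-mirror-map (exponentiated-gradient) step on the probability simplex, which is the mirror-descent reading described in the abstract. The learning rate `eta` and the array layout are illustrative choices.

```python
import numpy as np

def hedge(losses, eta=0.5):
    """Hedge / exponential weights: losses[t, i] is the loss of expert i at round t."""
    T, n = losses.shape
    w = np.full(n, 1.0 / n)                     # uniform prior over experts
    total_loss = 0.0
    for t in range(T):
        total_loss += w @ losses[t]             # expected loss of the current mixture
        w = w * np.exp(-eta * losses[t])        # multiplicative (entropy-mirror) update
        w = w / w.sum()                         # renormalize onto the simplex
    return w, total_loss
```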
Continuous optimization, an introduction
(Excerpt from the table of contents:)
2 (First order) Descent methods, rates
2.1 Gradient descent
2.2 What can we achieve?
2.3 Second order methods: Newton's method
2.4 Multistep first order methods
2.4.1 Heavy ball method ...