Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Authors
Abstract
Similar Articles
Alternating Proximal Gradient Method for Convex Minimization
In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of alternating direction method of multipliers. The main computational effort in each iteration of the proposed method is to compute the proximal mappings of the involved convex functions. T...
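The per-iteration cost described above is the evaluation of proximal mappings. As a rough illustration (not the paper's alternating scheme), the Python sketch below shows a single proximal gradient step, using the ℓ1 norm as an example of a function whose proximal mapping is cheap to compute; the names prox_l1 and proximal_gradient_step are illustrative only.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal mapping of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, grad_f, step, prox_g):
    # One proximal gradient step for min f(x) + g(x):
    # a gradient step on the smooth part f, then the proximal mapping of g.
    return prox_g(x - step * grad_f(x), step)

# Hypothetical usage: one step on (1/2)||Ax - b||^2 + lam * ||x||_1.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, -1.0])
lam, step = 0.1, 0.2
grad_f = lambda x: A.T @ (A @ x - b)
x1 = proximal_gradient_step(np.zeros(2), grad_f, step,
                            lambda v, t: prox_l1(v, lam * t))
```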
Optimized first-order methods for smooth convex minimization
We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle [5] recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods [15], and Nesterov's fast gradient methods [10,12]. However, the numerical method in [5] is computa...
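For context, the optimized gradient method (OGM) that grew out of this line of work is usually stated as a gradient step followed by a two-term momentum correction, with a modified momentum factor on the final iteration. The sketch below is a Python rendering of that standard statement for an L-smooth convex objective, assuming the Lipschitz constant L is known; it is illustrative, not the exact presentation of the cited paper.

```python
import numpy as np

def ogm(grad_f, x0, L, N):
    # Sketch of the optimized gradient method (OGM) for an L-smooth convex f.
    # grad_f: gradient oracle; x0: starting point; L: Lipschitz constant of grad_f;
    # N: number of iterations (the last iteration uses a modified momentum factor).
    x = np.asarray(x0, dtype=float).copy()
    y = x.copy()
    theta = 1.0
    for i in range(N):
        y_new = x - grad_f(x) / L                         # plain gradient step
        if i < N - 1:
            theta_new = (1.0 + np.sqrt(1.0 + 4.0 * theta**2)) / 2.0
        else:
            theta_new = (1.0 + np.sqrt(1.0 + 8.0 * theta**2)) / 2.0
        # Nesterov-type momentum plus an extra correction term along (y_new - x):
        x_new = (y_new
                 + (theta - 1.0) / theta_new * (y_new - y)
                 + theta / theta_new * (y_new - x))
        x, y, theta = x_new, y_new, theta_new
    return x
```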
A coordinate gradient descent method for ℓ1-regularized convex minimization
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., as ℓ1-regularized linear least-squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated a...
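To make the problem class concrete: the ℓ1-regularized least-squares objective can be minimized by coordinate-wise soft-thresholding. The sketch below is plain cyclic coordinate descent for (1/2)||Ax - b||^2 + lam * ||x||_1, offered as an illustration of the problem structure rather than the block coordinate gradient descent method proposed in the paper; all names are illustrative.

```python
import numpy as np

def lasso_coordinate_descent(A, b, lam, n_iter=100):
    # Cyclic coordinate descent for (1/2)||Ax - b||^2 + lam * ||x||_1.
    # Each coordinate update is an exact minimization via soft-thresholding.
    m, n = A.shape
    x = np.zeros(n)
    col_sq = np.sum(A**2, axis=0)              # ||A_j||^2 for each column j
    r = b - A @ x                               # residual, kept up to date
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]       # correlation with x_j removed
            x_j_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += A[:, j] * (x[j] - x_j_new)            # incremental residual update
            x[j] = x_j_new
    return x
```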
A Bundle Method for Solving Convex Non-smooth Minimization Problems
Numerical experience shows that bundle methods are very efficient for solving convex non-smooth optimization problems. In this paper we briefly describe the mathematical background of a bundle method and discuss practical aspects of its numerical implementation. Further, we give a detailed documentation of our implementation and report on numerical tests.
The Newton Bracketing Method for Convex Minimization
An iterative method for the minimization of convex functions f : R^n → R, called a Newton Bracketing (NB) method, is presented. The NB method proceeds by using Newton iterations to improve upper and lower bounds on the minimum value. The NB method is valid for n = 1, and in some cases for n > 1 (sufficient conditions given here). The NB method is applied to large-scale Fermat–Weber location probl...
Journal
Journal title: SIAM Journal on Optimization
Year: 2018
ISSN: 1052-6234, 1095-7189
DOI: 10.1137/17m112124x