A Probabilistic Incremental Proximal Gradient Method
Authors
Abstract
Similar Resources
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence under Bregman Distance Growth Conditions
We introduce a unified algorithmic framework, called the proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of smooth convex component functions and a proper closed convex regularization function that is possibly non-smooth and extended-valued, with an additional abstract feasible set whose geometry can be captured by using the domain of a Legendre function. The PLI...
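The general shape of such an incremental aggregated proximal iteration can be sketched as follows. This is an illustrative Euclidean instance with an assumed ℓ1 regularizer and least-squares components, not the PLIAG method itself (which works with Bregman distances):

```python
import numpy as np

# Illustrative incremental-aggregated proximal gradient loop for the assumed
# toy problem  min_x 0.5 * sum_i (a_i @ x - b_i)^2 + lam * ||x||_1.
np.random.seed(1)
m, d = 4, 3
A = np.random.randn(m, d)
b = np.random.randn(m)
lam, step = 0.1, 0.02

def soft_threshold(v, t):
    # Prox of t * ||.||_1: elementwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
stored = [(A[i] @ x - b[i]) * A[i] for i in range(m)]  # one stored gradient per component
for k in range(1000):
    i = k % m
    stored[i] = (A[i] @ x - b[i]) * A[i]   # refresh only component i's gradient
    agg = sum(stored)                      # aggregated (partly stale) full gradient
    x = soft_threshold(x - step * agg, step * lam)  # proximal step on the regularizer
print(x)
```

The key point is that each iteration touches a single component gradient, while the proximal step handles the non-smooth regularizer in closed form.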
An optimal randomized incremental gradient method
In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of m (≥ 1) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a pr...
An Accelerated Proximal Coordinate Gradient Method
We develop an accelerated randomized proximal coordinate gradient (APCG) method, for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized em...
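A plain (non-accelerated) randomized proximal coordinate gradient step is easy to sketch; APCG adds Nesterov-style momentum on top of updates of this form. The lasso objective below is an assumed illustration, not the paper's regularized empirical-risk setting:

```python
import numpy as np

# Randomized proximal coordinate gradient steps for the assumed problem
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
np.random.seed(2)
A = np.random.randn(20, 5)
b = np.random.randn(20)
lam = 0.1
L = (A ** 2).sum(axis=0)  # coordinate-wise Lipschitz constants ||a_j||^2

x = np.zeros(5)
rng = np.random.default_rng(0)
for _ in range(3000):
    j = rng.integers(5)                   # pick a random coordinate
    g_j = A[:, j] @ (A @ x - b)           # partial derivative of the smooth part
    v = x[j] - g_j / L[j]                 # coordinate gradient step
    x[j] = np.sign(v) * max(abs(v) - lam / L[j], 0.0)  # coordinate prox (soft-threshold)
print(x)
```

Each step costs one column of `A` rather than a full gradient, which is what makes coordinate methods attractive for wide problems.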
Alternating Proximal Gradient Method for Convex Minimization
In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of alternating direction method of multipliers. The main computational effort in each iteration of the proposed method is to compute the proximal mappings of the involved convex functions. T...
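What keeps the per-iteration cost low is that many proximal mappings have closed forms. One common example (assumed for illustration, not this paper's specific blocks): the prox of the indicator of a box is just coordinate-wise clipping.

```python
import numpy as np

def prox_box(v, lo, hi):
    # Prox of the indicator of the box [lo, hi]^d is the Euclidean
    # projection onto the box, i.e. coordinate-wise clipping.
    return np.clip(v, lo, hi)

print(prox_box(np.array([-2.0, 0.3, 5.0]), -1.0, 1.0))  # projects each entry into [-1, 1]
```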
A Convergent Incremental Gradient Method with a Constant Step Size
Abstract. An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits regions in which the gradient is small infinitely often. Under certain unimodality assu...
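The iteration described above can be sketched on an assumed least-squares toy problem: one component gradient per step, with a fixed step size.

```python
import numpy as np

# Minimize sum_i 0.5 * (a_i @ x - b_i)^2 using one component gradient per iteration.
np.random.seed(0)
A = np.random.randn(5, 2)
b = np.random.randn(5)

x = np.zeros(2)
alpha = 0.02  # constant step size (assumed small enough for stability)
for k in range(5000):
    i = k % len(b)                     # cycle through the components
    grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of the i-th component only
    x = x - alpha * grad_i

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))  # distance to the least-squares minimizer
```

With a constant step the iterates do not converge exactly but hover in a neighborhood of the minimizer, consistent with the guarantee that the gradient is small infinitely often.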
Journal
Journal title: IEEE Signal Processing Letters
Year: 2019
ISSN: 1070-9908,1558-2361
DOI: 10.1109/lsp.2019.2926926