Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
Authors
Abstract
We consider the convex-concave saddle point problem $\min_x \max_y \; f(x) + y^\top A x - g(y)$, where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if $f$ is not strongly convex. Our result generalizes previous work, which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and we show that the iterates of the primal-dual gradient method converge to this "ghost" sequence. Using the same technique, we further give an analysis of the primal-dual stochastic variance reduced gradient (SVRG) method for convex-concave saddle point problems with a finite-sum structure.
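For concreteness, here is a minimal NumPy sketch of the vanilla primal-dual gradient iteration on a toy instance of this problem; the particular choices of $f$, $g$, $A$, the step sizes, and the iteration count below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy instance of min_x max_y f(x) + y^T A x - g(y):
# f(x) = 0.5 * ||B x||^2 is convex but NOT strongly convex (B has few rows),
# g(y) = 0.5 * mu * ||y||^2 is strongly convex; A (m x n, m > n) has full
# column rank with high probability. All choices are for illustration only.
rng = np.random.default_rng(0)
n, m = 20, 30
A = rng.standard_normal((m, n))
B = rng.standard_normal((5, n))
mu = 1.0

grad_f = lambda x: B.T @ (B @ x)   # gradient of f
grad_g = lambda y: mu * y          # gradient of g

x, y = np.ones(n), np.zeros(m)
eta_x = eta_y = 1e-3               # small constant step sizes (assumed)
for k in range(20000):
    # Simultaneous (vanilla) primal-dual gradient step:
    #   x_{k+1} = x_k - eta_x * (grad f(x_k) + A^T y_k)   (descent in x)
    #   y_{k+1} = y_k + eta_y * (A x_k - grad g(y_k))     (ascent in y)
    x_new = x - eta_x * (grad_f(x) + A.T @ y)
    y_new = y + eta_y * (A @ x - grad_g(y))
    x, y = x_new, y_new

# Gradient residuals at the iterate; both shrink toward zero as the
# iterates approach the (unique) saddle point.
print("||grad_x L||:", np.linalg.norm(grad_f(x) + A.T @ y))
print("||grad_y L||:", np.linalg.norm(A @ x - grad_g(y)))
```

Note that $f$ here is deliberately not strongly convex ($B^\top B$ is rank-deficient); linear convergence in this sketch hinges on $A$ having full column rank, which is the regime the paper analyzes.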
Similar resources
Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems and are thus well suited to primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization in order to obtain fast linear convergence, and the required dual proximal mappi…
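The reformulation alluded to above is standard conjugate duality; a sketch of the generic form, with notation (data rows $a_i$, losses $\phi_i$, regularizer $g$) assumed here rather than taken from that paper:

```latex
% Replacing each loss phi_i by its convex conjugate phi_i^* turns ERM with
% linear predictors into a convex-concave saddle point problem:
\min_{x} \; \frac{1}{n}\sum_{i=1}^{n} \phi_i(a_i^\top x) + g(x)
\;=\;
\min_{x} \max_{y} \; \frac{1}{n}\sum_{i=1}^{n}
  \big( y_i \, a_i^\top x - \phi_i^*(y_i) \big) + g(x).
```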
Stochastic Variance Reduction Methods for Policy Evaluation
Policy evaluation is a crucial step in many reinforcement-learning procedures; it estimates a value function that predicts states' long-term value under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle point problem, and then …
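The quadratic saddle point typically arises from a least-squares objective via conjugate duality; a hedged sketch of the generic pattern, where the empirical moment matrices $\hat A$, $\hat b$, $\hat C$ are assumed notation for illustration:

```latex
% A weighted least-squares objective and its equivalent saddle point form,
% using max_w w^T u - 0.5 w^T C w = 0.5 u^T C^{-1} u with u = b - A theta:
\min_{\theta} \; \tfrac{1}{2}\,\big\| \hat A \theta - \hat b \big\|_{\hat C^{-1}}^{2}
\;=\;
\min_{\theta} \max_{w} \; w^\top \big( \hat b - \hat A \theta \big)
  - \tfrac{1}{2}\, w^\top \hat C \, w .
```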
Randomized Primal-Dual Proximal Block Coordinate Updates
In this paper we propose a randomized primal-dual proximal block coordinate updating framework for a general multi-block convex optimization model with coupled objective function and linear constraints. Assuming mere convexity, we establish its O(1/t) convergence rate in terms of the objective value and feasibility measure. The framework includes several existing algorithms as s…
Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems
We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method shares the efficiency and flexibility of block coordinate descent methods with the simplicity of prim...
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbi...
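For reference, here is a minimal NumPy sketch of the deterministic Chambolle-Pock primal-dual hybrid gradient iteration that the stochastic extension builds on; the specific functions $f$, $g$ and their proximal operators below are illustrative assumptions, not from that paper:

```python
import numpy as np

# PDHG sketch for min_x f(x) + g(A x) with illustrative choices:
#   f(x) = 0.5 * ||x - c||^2, g(z) = ||z||_1.
# Then g*(y) is the indicator of the unit l-infinity ball, so
# prox_{sigma g*} is a coordinatewise clip to [-1, 1].
rng = np.random.default_rng(1)
m, n = 40, 25
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)

L = np.linalg.norm(A, 2)          # operator norm ||A||
tau = sigma = 0.9 / L             # step sizes with tau * sigma * ||A||^2 < 1

prox_f = lambda v, t: (v + t * c) / (1.0 + t)   # prox of t*f at v
prox_gstar = lambda v: np.clip(v, -1.0, 1.0)    # projection onto ||y||_inf <= 1

x = np.zeros(n); y = np.zeros(m); x_bar = x.copy()
for k in range(2000):
    y = prox_gstar(y + sigma * (A @ x_bar))      # dual ascent step
    x_new = prox_f(x - tau * (A.T @ y), tau)     # primal descent step
    x_bar = 2 * x_new - x                        # over-relaxation / extrapolation
    x = x_new

print("objective:", 0.5 * np.sum((x - c) ** 2) + np.sum(np.abs(A @ x)))
```

The stochastic variant studied in that paper replaces the full dual update with an update over a sampled subset of dual coordinates; the sketch above shows only the deterministic skeleton.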
Journal: CoRR
Volume: abs/1802.01504
Publication date: 2018