On lower iteration complexity bounds for the convex concave saddle point problems
Authors
Junyu Zhang, Mingyi Hong, Shuzhong Zhang
Abstract
In this paper, we study the lower iteration complexity bounds for finding the saddle point of a strongly convex and strongly concave saddle point problem: $\min_x\max_y F(x,y)$. We restrict the classes of algorithms in our investigation to be either pure first-order methods or methods using proximal mappings. The existing lower bound result for this type of problem is obtained via the framework of strongly monotone variational inequality problems, which corresponds to the case where the gradient Lipschitz constants ($L_x$, $L_y$ and $L_{xy}$) and the strong convexity/concavity constants ($\mu_x$ and $\mu_y$) are uniform with respect to the variables $x$ and $y$. However, for a specific min-max problem these parameters are naturally different. Therefore, one is led to seeking the best possible lower bounds under the more refined parameter models. In this paper we present the following results. For the class of pure first-order algorithms, the lower iteration complexity bound is $\Omega\left(\sqrt{\frac{L_x}{\mu_x}+\frac{L_{xy}^2}{\mu_x\mu_y}+\frac{L_y}{\mu_y}}\cdot\ln\left(\frac{1}{\epsilon}\right)\right)$, where the term $\frac{L_{xy}^2}{\mu_x\mu_y}$ explains how the coupling influences the complexity. Under several special parameter regimes, this lower bound has been achieved by corresponding optimal algorithms; whether or not it is achievable under the general parameter regime remains open. Additionally, for the special case of bilinearly coupled problems, given the availability of certain proximal operators, a lower bound of $\Omega\left(\sqrt{\frac{L_{xy}^2}{\mu_x\mu_y}+1}\cdot\ln\left(\frac{1}{\epsilon}\right)\right)$ is established, and optimal algorithms matching this bound have already been developed in the literature.
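To make the constants concrete, here is a minimal numerical sketch (not from the paper): on a bilinearly coupled quadratic $F(x,y)=\frac{\mu_x}{2}\|x\|^2 + x^\top By - \frac{\mu_y}{2}\|y\|^2$, the parameters in the bound can be read off directly ($L_x=\mu_x$, $L_y=\mu_y$, $L_{xy}=\|B\|_2$, so the coupling term $\frac{L_{xy}^2}{\mu_x\mu_y}$ dominates), and plain gradient descent-ascent (GDA) exhibits the linear convergence that the $\ln(1/\epsilon)$ factor refers to. The dimensions, step size, and iteration budget below are illustrative choices; GDA is only a baseline here, not one of the optimal algorithms that match the lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 15
mu_x, mu_y = 1.0, 0.5
B = rng.standard_normal((n, m))                # bilinear coupling matrix

# Constants appearing in the lower bound, read off from this instance.
L_xy = np.linalg.svd(B, compute_uv=False)[0]   # spectral norm ||B||_2
L_x, L_y = mu_x, mu_y                          # Hessian blocks are mu * I
kappa = np.sqrt(L_x / mu_x + L_xy**2 / (mu_x * mu_y) + L_y / mu_y)
print(f"sqrt(L_x/mu_x + L_xy^2/(mu_x mu_y) + L_y/mu_y) = {kappa:.2f}")

# Plain GDA on G(z) = (grad_x F, -grad_y F); G is strongly monotone with
# modulus mu = min(mu_x, mu_y) and Lipschitz with constant at most L.
mu = min(mu_x, mu_y)
L = max(mu_x, mu_y) + L_xy
eta = mu / L**2                                # classical safe step size
x, y = rng.standard_normal(n), rng.standard_normal(m)
for _ in range(20000):
    gx = mu_x * x + B @ y                      # grad_x F
    gy = B.T @ x - mu_y * y                    # grad_y F
    x, y = x - eta * gx, y + eta * gy
# The unique saddle point of this F is (0, 0).
print("distance to saddle:", np.hypot(np.linalg.norm(x), np.linalg.norm(y)))
```

Note that this baseline needs far more iterations than the lower bound suggests is necessary; the optimal algorithms referenced in the abstract close that gap in the special parameter regimes mentioned.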
Similar References
On generalized SSOR-like iteration method for saddle point problems
In this paper, we study iterative algorithms for saddle point problems (SPP). We present a new symmetric successive over-relaxation method with three parameters, which is an extension of the SSOR iteration method. Under suitable conditions, we give convergence results. Numerical examples further confirm the correctness of the theory and the effectiveness of the method. Key-Words: i...
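For orientation only, the sketch below runs a classical Uzawa iteration on a 2x2-block saddle point system; this is not the three-parameter SSOR-like scheme of the paper (which the snippet does not specify), but it illustrates the kind of stationary iteration such methods refine. The problem data and the relaxation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite block
B = rng.standard_normal((n, m))        # full-column-rank coupling block
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Uzawa iteration for [A, B; B^T, 0] [x; y] = [f; g].
S = B.T @ np.linalg.solve(A, B)        # Schur complement (SPD)
omega = 1.0 / np.linalg.eigvalsh(S).max()  # safe relaxation: < 2/lambda_max
x, y = np.zeros(n), np.zeros(m)
for _ in range(5000):
    x = np.linalg.solve(A, f - B @ y)  # exact solve of the (1,1) block
    y = y + omega * (B.T @ x - g)      # relaxed multiplier update

# Residuals of both block equations should be near zero.
print(np.linalg.norm(A @ x + B @ y - f), np.linalg.norm(B.T @ x - g))
```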
Saddle Point Seeking for Convex Optimization Problems
In this paper, we consider convex optimization problems with constraints. By combining the idea of a Lie bracket approximation for extremum seeking systems with saddle point algorithms, we propose a feedback which steers a single-integrator system to the set of saddle points of the Lagrangian associated with the convex optimization problem. We prove practical uniform asymptotic stability of the se...
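As a point of reference, the following sketch forward-Euler-discretizes the underlying saddle point (gradient descent-ascent) flow on the Lagrangian of a small equality-constrained convex problem; the paper's contribution, by contrast, is a Lie-bracket/extremum-seeking feedback that achieves this steering without direct gradient measurements. The problem data and Euler step are assumed for illustration.

```python
import numpy as np

c = np.array([2.0, -1.0, 0.5])           # objective: 0.5 * ||x - c||^2
a, b = np.array([1.0, 1.0, 1.0]), 1.0    # constraint: a^T x = b
# Lagrangian L(x, lam) = 0.5 * ||x - c||^2 + lam * (a^T x - b)

x, lam = np.zeros(3), 0.0
dt = 0.05                                # Euler step (illustrative)
for _ in range(4000):
    x = x - dt * ((x - c) + lam * a)     # descent in the primal variable
    lam = lam + dt * (a @ x - b)         # ascent in the multiplier

print("x* =", x, " constraint residual =", a @ x - b)
```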
On the convergence of conditional epsilon-subgradient methods for convex programs and convex-concave saddle-point problems
The paper provides two contributions. First, we present new convergence results for conditional ε-subgradient algorithms for general convex programs. The results obtained here extend the classical ones by Polyak [Sov. Math. Doklady 8 (1967) 593; USSR Comput. Math. Math. Phys. 9 (1969) 14; Introduction to Optimization, Optimization Software, New York, 1987] as well as the recent ones in [Math. P...
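For context, here is a minimal sketch of a plain projected subgradient method with the divergent-series step sizes whose analysis this line of work extends; the conditional ε-subgradient algorithms studied in the paper generalize this basic template. The test problem (minimize an l1 norm over a box) and the step rule are illustrative assumptions.

```python
import numpy as np

target = np.array([0.8, -0.3, 1.5])
lo, hi = target - 0.5, target + 0.5      # feasible box X

def subgrad(x):
    # A valid subgradient of f(x) = ||x||_1 (sign, with 0 where x == 0).
    return np.sign(x)

x = target.copy()
for k in range(1, 5001):
    step = 1.0 / k                       # divergent series: sum diverges,
                                         # terms go to zero
    x = np.clip(x - step * subgrad(x), lo, hi)   # projection onto X

print("approx minimizer:", x)            # per-coordinate clip of 0 to X
```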
A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
This supplementary material includes numerical examples demonstrating the flexibility and potential of the algorithm PAPC developed in the paper. We show that PAPC behaves numerically as predicted by the theory, and can efficiently solve problems which cannot be solved by well-known state-of-the-art algorithms sharing the same efficiency estimate. Here, for illustration purposes, we compare ...
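For concreteness, below is a hedged sketch of one commonly stated form of the PAPC iteration for $\min_x f(x) + g(Ax)$ (smooth $f$, proximable $g$), written through its convex-concave saddle-point form $\min_x\max_y f(x) + \langle Ax, y\rangle - g^*(y)$. The problem data, step sizes, and iteration count are illustrative assumptions; the exact step-size conditions should be taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 25
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)

grad_f = lambda x: x - c                 # f(x) = 0.5 * ||x - c||^2, L_f = 1
prox_gstar = lambda y: np.clip(y, -1, 1) # g = ||.||_1, so g* is the
                                         # indicator of the l-inf ball

L_f = 1.0
tau = 1.0 / L_f                          # primal step
sigma = 1.0 / (tau * np.linalg.svd(A, compute_uv=False)[0] ** 2)  # dual step

x, y = np.zeros(n), np.zeros(m)
for _ in range(3000):
    p = x - tau * (grad_f(x) + A.T @ y)  # predictor (primal)
    y = prox_gstar(y + sigma * (A @ p))  # dual ascent plus prox
    x = x - tau * (grad_f(x) + A.T @ y)  # corrector (primal)

print("objective:", 0.5 * np.linalg.norm(x - c)**2 + np.linalg.norm(A @ x, 1))
```

The conservative choice of sigma keeps the product tau * sigma * ||A||^2 at 1, inside the usual admissible range for this scheme.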
Preconditioned Douglas-Rachford Splitting Methods for Convex-concave Saddle-point Problems
We propose a preconditioned version of the Douglas-Rachford splitting method for solving convex-concave saddle-point problems associated with Fenchel-Rockafellar duality. It allows the use of approximate solvers for the linear subproblem arising in this context. We prove weak convergence in Hilbert space under minimal assumptions. In particular, various efficient preconditioners are introduced in th...
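To fix ideas, here is a minimal sketch of plain (unpreconditioned) Douglas-Rachford splitting for $\min_x f(x) + g(x)$, the basic scheme the paper's preconditioned variant builds on; the preconditioned version additionally permits approximate solves of the linear subproblem. The problem instance and the step parameter t are illustrative assumptions.

```python
import numpy as np

c = np.array([3.0, -0.2, 0.7, -1.5])
t = 1.0                                   # DR step parameter

prox_f = lambda v: (v + t * c) / (1 + t)  # f(x) = 0.5 * ||x - c||^2
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # g = ||.||_1

z = np.zeros_like(c)
for _ in range(200):
    x = prox_f(z)                         # resolvent of f
    z = z + prox_g(2 * x - z) - x         # reflected DR update of z

print("x* =", prox_f(z))                  # expect soft-thresholding of c at 1
```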
Journal
Journal title: Mathematical Programming
Year: 2021
ISSN: 0025-5610, 1436-4646
DOI: https://doi.org/10.1007/s10107-021-01660-z