Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
Authors
Abstract
In this paper, we consider both a variant of Tseng’s modified forward-backward splitting method and an extension of Korpelevich’s method for solving hemivariational inequalities with Lipschitz continuous operators. By showing that these methods are special cases of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive iteration-complexity bounds for them to obtain different types of approximate solutions. In the context of saddle-point problems, we also derive complexity bounds for these methods to obtain another type of approximate solution, namely an approximate saddle point. Finally, we illustrate the usefulness of the above results by applying them to a large class of linearly constrained convex programming problems, including, for example, cone programming and problems whose objective functions converge to infinity as the boundaries of their effective domains are approached.
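For orientation, the two base methods are usually stated as follows (these are the standard textbook forms for a monotone operator F that is L-Lipschitz on the feasible set X, with B maximal monotone; they are offered as background, not as a transcription of the specific variants analyzed in the paper):

\[
\text{Korpelevich:}\quad y^k = P_X\big(x^k - \lambda F(x^k)\big), \qquad x^{k+1} = P_X\big(x^k - \lambda F(y^k)\big),
\]
\[
\text{Tseng:}\quad y^k = (I + \lambda B)^{-1}\big(x^k - \lambda F(x^k)\big), \qquad x^{k+1} = y^k - \lambda\big(F(y^k) - F(x^k)\big),
\]

with a fixed stepsize \lambda \in (0, 1/L). Viewed through the HPE lens, y^k plays the role of an inexact proximal point and x^{k+1} of an extragradient correction, which is what allows a unified iteration-complexity analysis of both schemes.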
Similar resources
Complexity of variants of Tseng’s modified F-B splitting and Korpelevich’s methods for generalized variational inequalities with applications to saddle point and convex optimization problems
In this paper, we consider both a variant of Tseng’s modified forward-backward splitting method and an extension of Korpelevich’s method for solving generalized variational inequalities with Lipschitz continuous operators. By showing that these methods are special cases of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive iteration-complexity bounds for...
An efficient modified neural network for solving nonlinear programming problems with hybrid constraints
This paper presents optimization techniques for solving convex programming problems with hybrid constraints. Drawing on the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model is constructed. The equilibrium point of the proposed model is proved to be equivalent to the optima...
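For intuition (this is the classical saddle-point gradient flow, stated here as background rather than as the particular network proposed in that paper), a continuous-time dynamics for a convex-concave Lagrangian L(x, y) evolves as

\[
\dot{x} = -\nabla_x L(x, y), \qquad \dot{y} = +\nabla_y L(x, y),
\]

so its equilibria are exactly the points where both gradients vanish, i.e., the saddle points of L; this is the standard sense in which equilibria of such dynamics recover the optima of the underlying convex program.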
An Interior Point Algorithm for Solving Convex Quadratic Semidefinite Optimization Problems Using a New Kernel Function
In this paper, we consider convex quadratic semidefinite optimization problems and provide a primal-dual Interior Point Method (IPM) based on a new kernel function with a trigonometric barrier term. The iteration complexity of the algorithm is analyzed under some mild, easy-to-check conditions. Although our proposed kernel function is neither a Self-Regular (SR) fun...
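For context (this is the classical logarithmic kernel that serves as the baseline in kernel-function IPM analyses, not the trigonometric kernel proposed in that paper), a kernel function \psi induces the barrier \Psi(v) = \sum_i \psi(v_i) on the scaled iterate v, and the classical choice is

\[
\psi(t) = \frac{t^2 - 1}{2} - \ln t, \qquad t > 0,
\]

which satisfies \psi(1) = \psi'(1) = 0, so that \Psi(v) = 0 exactly on the central path; the iteration complexity of the IPM is then governed by growth properties of \psi.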
متن کاملConvergence rate of inexact proximal point methods with relative error criteria for convex optimization
In this paper, we consider a framework of inexact proximal point methods for convex optimization that allows a relative error tolerance in the approximate solution of each proximal subproblem and establish its convergence rate. We then show that the well-known forward-backward splitting algorithm for convex optimization belongs to this framework. Finally, we propose and establish the iteration-...
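To make the relative error criterion concrete (this is the standard HPE-type condition of Solodov and Svaiter for a maximal monotone operator T, written with a tolerance \sigma \in [0, 1); the notation is ours rather than that paper's), an inexact proximal step from x^k accepts a pair (y^k, v^k) with v^k \in T^{\varepsilon_k}(y^k) whenever

\[
\|\lambda_k v^k + y^k - x^k\|^2 + 2\lambda_k \varepsilon_k \le \sigma^2 \|y^k - x^k\|^2,
\]

and then updates x^{k+1} = x^k - \lambda_k v^k. Setting \sigma = 0 and \varepsilon_k = 0 forces y^k = (I + \lambda_k T)^{-1}(x^k) and recovers the exact proximal point method.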
Modified Convex Data Clustering Algorithm Based on Alternating Direction Method of Multipliers
Given that the main weakness of most standard methods, including k-means and hierarchical data clustering, is their sensitivity to initialization and their tendency to get trapped in local minima, this paper proposes a modification of convex data clustering in which there is no need to be particular about how initial values are selected. By properly converting the optimization task to an equivalent...
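As background (this is the standard convex clustering formulation commonly solved by ADMM; the weights and penalty are generic, and it is not necessarily the exact variant modified in that paper), one assigns each data point x_i its own centroid u_i and solves

\[
\min_{u_1,\dots,u_n} \; \frac{1}{2}\sum_{i=1}^n \|u_i - x_i\|^2 \; + \; \gamma \sum_{i < j} w_{ij}\,\|u_i - u_j\|.
\]

Because the objective is convex, the solution is independent of initialization; the fusion penalty forces centroids to coalesce as \gamma grows, and the coalesced groups define the clusters.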
Journal: SIAM Journal on Optimization
Volume 21, Issue -
Pages: -
Publication date: 2011