Semi-smooth Second-order Type Methods for Composite Convex Programs
Authors
Abstract
The goal of this paper is to study approaches that bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: i) many well-known operator splitting methods, such as forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS), define a fixed-point mapping that is monotone and possibly semi-smooth; ii) the optimal solutions of the composite convex program coincide with the solutions of the system of nonlinear equations derived from this fixed-point mapping. Solving the system of nonlinear equations thus recovers a paradigm for developing second-order methods. Although these fixed-point mappings may not be differentiable, they are often semi-smooth, and their generalized Jacobian matrices are positive semidefinite due to monotonicity. By combining a regularization approach with a known hyperplane projection technique, we propose an adaptive semi-smooth Newton method and establish its convergence to global optimality. A semi-smooth Levenberg-Marquardt (LM) method for the corresponding nonlinear least squares formulation is further presented. In practice, the second-order methods can be activated once the first-order type methods have reached a good neighborhood of the global optimal solution. Preliminary numerical results on Lasso regression, logistic regression, basis pursuit, linear programming and quadratic programming demonstrate that our second-order type algorithms achieve quadratic or superlinear convergence as long as the fixed-point residual of the initial point is small enough.
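To make the fixed-point viewpoint concrete, the sketch below forms the FBS fixed-point residual F(z) = z - prox_{t*lam*||.||_1}(z - t*grad f(z)) for a Lasso problem and applies a regularized semi-smooth Newton step to it. This is a minimal illustration of the idea rather than the paper's algorithm: the diagonal generalized Jacobian of soft-thresholding, the regularization rule mu = c*||F(z)||, the function names, and the random test instance are assumptions made here for exposition, and the hyperplane projection globalization and adaptive safeguards described in the abstract are omitted.

```python
import numpy as np

def soft_threshold(u, tau):
    # Proximal operator of tau*||.||_1 (componentwise soft-thresholding).
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def fbs_residual(z, A, b, lam, t):
    # FBS fixed-point residual F(z) = z - prox_{t*lam*||.||_1}(z - t*grad f(z))
    # for the Lasso objective 0.5*||A z - b||^2 + lam*||z||_1.
    u = z - t * (A.T @ (A @ z - b))
    return z - soft_threshold(u, t * lam)

def generalized_jacobian(z, A, b, lam, t):
    # One element of the Clarke generalized Jacobian of F at z:
    # J = I - D (I - t*A^T A), with D diagonal and D_ii = 1 iff |u_i| > t*lam.
    n = z.size
    u = z - t * (A.T @ (A @ z - b))
    d = (np.abs(u) > t * lam).astype(float)
    return np.eye(n) - d[:, None] * (np.eye(n) - t * (A.T @ A))

def regularized_newton_step(z, A, b, lam, t, c=1.0):
    # One regularized semi-smooth Newton step: solve (J + mu*I) d = -F(z),
    # with mu = c*||F(z)|| so the regularization vanishes near a solution.
    # The hyperplane projection globalization from the paper is omitted here.
    F = fbs_residual(z, A, b, lam, t)
    mu = c * np.linalg.norm(F)
    J = generalized_jacobian(z, A, b, lam, t)
    step = np.linalg.solve(J + mu * np.eye(z.size), -F)
    return z + step, np.linalg.norm(F)

# Hypothetical small Lasso instance, used only to exercise the iteration.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A^T A||_2
z = np.zeros(20)
for _ in range(30):
    z, res = regularized_newton_step(z, A, b, lam, t)
    if res < 1e-10:
        break
print("fixed-point residual:", res)
```

Near a solution the 0/1 pattern in the generalized Jacobian typically stabilizes, so the linear system effectively becomes a Newton system restricted to the support of z; this is the intuition behind the fast local convergence reported in the abstract.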
Similar Resources
On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs
In this paper, we attempt to investigate a class of constrained nonsmooth convex optimization problems, that is, piecewise C2 convex objectives with smooth convex inequality constraints. By using the Moreau-Yosida regularization, we convert these problems into unconstrained smooth convex programs. Then, we investigate the second-order properties of the Moreau-Yosida regularization η. By introdu...
Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
In the paper, we develop a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing us to cover saddle point/variational analogies of what is usually called "composite minimization" (minimizing a sum of an easy-to-handle nonsmooth function and a general smooth convex function "as if" there were no n...
Conditional gradient type methods for composite nonlinear and stochastic optimization
In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
Generic identifiability and second-order sufficiency in tame convex optimization
We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, “tame”). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is “partly smooth”, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality condition...
Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Quadratic and Semi-Definite Programming
In this paper, we aim to provide a comprehensive analysis of the linear rate of convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a certain error bound condition, we establish the global linear rate of convergence for a more general semi-proximal ADMM with the dual steplength being restricted to be i...