A Trust Region Method for Nonsmooth Convex Optimization
Authors
Abstract
We propose an iterative method that solves a nonsmooth convex optimization problem by converting the original objective function into a once continuously differentiable function by way of Moreau-Yosida regularization. The proposed method uses approximate function and gradient values of the Moreau-Yosida regularization instead of the corresponding exact values. In this setting, Fukushima and Qi (1996) and Rauf and Fukushima (2000) proposed a proximal Newton method and a proximal BFGS method, respectively, for nonsmooth convex optimization. While those methods employ a line search strategy to achieve global convergence, the method proposed in this paper uses a trust region strategy. We establish global and superlinear convergence of the method under appropriate assumptions.
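For reference (an editorial addition, not part of the original abstract): the Moreau-Yosida regularization of a convex function f with parameter \lambda > 0 is the standard construction

    F_\lambda(x) = \min_{y \in \mathbb{R}^n} \left\{ f(y) + \frac{1}{2\lambda} \|y - x\|^2 \right\},
    \qquad
    \nabla F_\lambda(x) = \frac{x - p_\lambda(x)}{\lambda},

where p_\lambda(x) is the unique minimizer (the proximal point of x). F_\lambda is once continuously differentiable even when f is not, and its minimizers coincide with those of f, so smooth trust region machinery can be applied to F_\lambda; the method described above works with inexact values of F_\lambda and \nabla F_\lambda.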
Similar works
A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs
The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems due to its simplicity and low memory requirements. So far, however, the use of CG methods has mainly been restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods to solve nonsmooth optimization probl...
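For context, a hedged sketch: in the classical Polak-Ribière-Polyak scheme for a smooth objective f, the search direction is

    d_0 = -g_0, \qquad
    d_k = -g_k + \beta_k^{\mathrm{PRP}} d_{k-1}, \qquad
    \beta_k^{\mathrm{PRP}} = \frac{g_k^\top (g_k - g_{k-1})}{\|g_{k-1}\|^2},

with g_k = \nabla f(x_k). The excerpt does not specify the paper's modification for the nonsmooth case, so the above is only the standard smooth baseline.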
A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization
We introduce an algorithm to minimize a function of several variables under no convexity or smoothness assumptions. The main peculiarity of our approach is the use of an objective function model that is the difference of two piecewise affine convex functions. Bundling and trust region concepts are embedded in the algorithm. Convergence of the algorithm to a stationary point is proved.
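A minimal illustration of that model class (the specific model in the paper may differ): a DC piecewise affine function is a difference of two pointwise maxima of affine functions,

    \varphi(x) = \max_{1 \le i \le m} \big( a_i^\top x + b_i \big) \;-\; \max_{1 \le j \le p} \big( c_j^\top x + d_j \big),

each maximum being piecewise affine and convex.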
On Sequential Optimality Conditions without Constraint Qualifications for Nonlinear Programming with Nonsmooth Convex Objective Functions
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Here, nonsmooth approximate gradient projection and complementary approximate Karush-Kuhn-Tucker conditions are presented. These sequential optimality conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constrai...
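As a hedged sketch of the general flavor of such conditions (the paper's exact statements may differ): for minimizing a nonsmooth convex f subject to smooth constraints g_i(x) \le 0, an approximate-KKT-type condition asks for sequences x_k \to x^*, subgradients \xi_k \in \partial f(x_k), and multipliers \lambda_{k,i} \ge 0 such that

    \xi_k + \sum_i \lambda_{k,i} \nabla g_i(x_k) \to 0
    \qquad \text{and} \qquad
    \min\{-g_i(x_k), \lambda_{k,i}\} \to 0 \ \text{ for each } i,

so that stationarity and complementarity are required only in the limit, without any constraint qualification at x^*.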
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Optimality conditions for Pareto efficiency and proper ideal point in set-valued nonsmooth vector optimization using contingent cone
In this paper, we first present a new and important property of the Bouligand tangent cone (contingent cone) of a star-shaped set. We then establish optimality conditions for Pareto minima and proper ideal efficiencies in nonsmooth vector optimization problems by means of the Bouligand tangent cone of the image set, where the objective is a generalized cone-convex set-valued map, in general real normed spaces.
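For reference, the Bouligand tangent (contingent) cone of a set S in a normed space X at a point x \in S, on which these optimality conditions are built, is

    T(S, x) = \{ d \in X : \exists\, t_k \downarrow 0,\ \exists\, d_k \to d \ \text{such that} \ x + t_k d_k \in S \ \text{for all } k \}.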