On the conditional acceptance of iterates in SAO algorithms based on convex separable approximations
Authors
A. A. Groenwold · L. F. P. Etman
Abstract
We reflect on the convergence and termination of optimization algorithms based on convex and separable approximations using two recently proposed strategies, namely a trust region with filtered acceptance of the iterates, and conservatism. We then propose a new strategy for convergence and termination, denoted filtered conservatism, in which the acceptance or rejection of an iterate is determined using the nonlinear acceptance filter. However, if an iterate is rejected, we increase the conservatism of every unconservative approximation, rather than reducing the trust region. Filtered conservatism aims to combine the salient features of trust region strategies with nonlinear acceptance filters on the one hand, and conservatism on the other. In filtered conservatism, the nonlinear acceptance filter is used to decide if an iterate is accepted or rejected. This allows for the acceptance of infeasible iterates, which would not be accepted in a method based on conservatism. If, however, an iterate is rejected, the trust region need not be decreased; it may be kept constant. Convergence is then effected by increasing the conservatism of only the unconservative approximations in the (large, constant) trust region, until the iterate becomes acceptable to the filter. Numerical results corroborate the accuracy and robustness of the method.

Based on the paper entitled 'Globally convergent SAO algorithms for large scale simulation-based optimization', presented at the 8th World Congress on Structural and Multidisciplinary Optimization, 1–5 June 2009, Lisbon, Portugal.

A. A. Groenwold (B), Department of Mechanical Engineering, University of Stellenbosch, Matieland, South Africa, e-mail: [email protected]
L. F. P. Etman, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
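The accept/reject logic the abstract describes can be sketched in a few lines. The toy problem, the grid-scan subproblem solver, and the doubling rule for the conservatism parameters rho are all illustrative assumptions, not the authors' implementation; only the overall scheme (filter-based acceptance, fixed trust region, conservatism increased on rejection) follows the text.

```python
def filter_accepts(pairs, h, f):
    """Nonlinear acceptance filter: a trial point with constraint violation h
    and objective f is acceptable iff no stored pair (h_j, f_j) dominates it."""
    return all(h < hj or f < fj for (hj, fj) in pairs)

# Illustrative 1-D problem (an assumption, not from the paper):
#   minimise (x - 2)^2  subject to  x^2 - 1 <= 0,  optimum x* = 1.
f,  df = lambda x: (x - 2.0) ** 2, lambda x: 2.0 * (x - 2.0)
g,  dg = lambda x: x * x - 1.0,    lambda x: 2.0 * x

def solve_subproblem(xk, rho_f, rho_g, radius=2.0, n=4000):
    """Minimise the convex separable (here: quadratic) approximations over the
    fixed trust region [xk - radius, xk + radius] by brute-force grid scan."""
    best_x, best_fa = xk, float("inf")
    for i in range(n + 1):
        d = -radius + 2.0 * radius * i / n
        fa = f(xk) + df(xk) * d + 0.5 * rho_f * d * d   # objective approx.
        ga = g(xk) + dg(xk) * d + 0.5 * rho_g * d * d   # constraint approx.
        if ga <= 0.0 and fa < best_fa:
            best_x, best_fa = xk + d, fa
    return best_x

x, rho_f, rho_g = 0.0, 0.1, 0.1
pairs = [(max(0.0, g(x)), f(x))]       # seed the filter with the start point
for _ in range(30):
    x_new = solve_subproblem(x, rho_f, rho_g)
    h_new, f_new = max(0.0, g(x_new)), f(x_new)
    if filter_accepts(pairs, h_new, f_new):
        pairs.append((h_new, f_new))   # accepted; the trust region is NOT shrunk
        x = x_new
    else:
        # Rejected: increase the conservatism of every approximation that
        # underestimates its true function at the rejected iterate.
        d = x_new - x
        if f(x) + df(x) * d + 0.5 * rho_f * d * d < f_new:
            rho_f *= 2.0
        if g(x) + dg(x) * d + 0.5 * rho_g * d * d < g(x_new):
            rho_g *= 2.0
# x is now close to the constrained optimum x* = 1
```

Note that the very first accepted iterate in this sketch is infeasible (the small rho values let the subproblem overshoot the constraint boundary), illustrating the abstract's point that the filter admits infeasible iterates that a purely conservatism-based method would reject.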
Similar articles
Primal-Dual Stochastic Hybrid Approximation Algorithm
A new algorithm for solving convex stochastic optimization problems with expectation functions in both the objective and constraints is presented. The algorithm combines a stochastic hybrid procedure, which was originally designed to solve problems with expectation only in the objective, with dual stochastic gradient ascent. More specifically, the algorithm generates primal iterates by minimizi...
Convergence theorems of implicit iterates with errors for generalized asymptotically quasi-nonexpansive mappings in Banach spaces
In this paper, we prove that an implicit iterative process with errors converges strongly to a common fixed point for a finite family of generalized asymptotically quasi-nonexpansive mappings on unbounded sets in a uniformly convex Banach space. Our results unify, improve and generalize the corresponding results of Ud-din and Khan [4], Sun [21], Wittman [23], Xu and Ori [26] and many others.
A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations
This paper deals with a certain class of optimization methods, based on conservative convex separable approximations (CCSA), for solving inequality-constrained nonlinear programming problems. Each generated iteration point is a feasible solution with lower objective value than the previous one, and it is proved that the sequence of iteration points converges toward the set of Karush–Kuhn–Tucker...
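The conservatism notion underlying CCSA can be illustrated in one variable: an approximation is conservative at a trial point when it does not underestimate the true function there, and its curvature parameter is raised until this holds. The quartic test function and the doubling rule below are illustrative assumptions, not taken from the paper.

```python
def is_conservative(approx_val, true_val):
    # The conservatism test: the approximation must not underestimate
    # the true function value at the candidate iterate.
    return approx_val >= true_val

# Hypothetical quadratic separable approximation of f(x) = x^4 about xk = 1.
f = lambda x: x ** 4
xk = 1.0
grad = 4.0 * xk ** 3
approx = lambda x, rho: f(xk) + grad * (x - xk) + 0.5 * rho * (x - xk) ** 2

x_trial, rho = 0.0, 2.0
while not is_conservative(approx(x_trial, rho), f(x_trial)):
    rho *= 2.0          # increase conservatism until the check passes
print(rho)              # prints 8.0
```

With rho = 2 the approximation at x_trial = 0 evaluates to -2, well below the true value f(0) = 0; two doublings make it conservative.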
Rescaling and Stepsize Selection in Proximal Methods Using Separable Generalized Distances
This paper presents a convergence proof technique for a broad class of proximal algorithms in which the perturbation term is separable and may contain barriers enforcing interval constraints. There are two key ingredients in the analysis: a mild regularity condition on the differential behavior of the barrier as one approaches an interval boundary, and a lower stepsize limit that takes into acc...
New infeasible interior-point algorithm based on monomial method
We propose a new infeasible path-following algorithm for convex linearly constrained quadratic programming problems. This algorithm utilizes the monomial method rather than Newton's method for solving the KKT equations at each iteration. As a result, the sequence of iterates generated by this new algorithm is infeasible in the primal and dual linear constraints, but, unlike the sequence of iterat...