Memoryless Modified Symmetric Rank-One Method for Large-Scale Unconstrained Optimization
Abstract
Problem statement: Memoryless Quasi-Newton (QN) methods, which can be viewed as one-step limited memory QN methods, have been regarded as effective techniques for solving large-scale problems. In this study, we present a scaled memoryless modified Symmetric Rank-One (SR1) algorithm and investigate its numerical performance on large-scale unconstrained optimization problems. Approach: The basic idea is to apply the modified QN equation, which uses both the gradients and the function values at two successive points, within the frame of the scaled memoryless SR1 update, in which the modified SR1 update is reset, at every iteration, to a positive multiple of the identity matrix. The scaling of the identity is chosen so that the positive definiteness of the memoryless modified SR1 update is preserved. Results: Under some suitable conditions, global convergence and the rate of convergence are established. Computational results, for a test set consisting of 73 unconstrained optimization problems, show that the proposed algorithm is very encouraging. Conclusion/Recommendations: In this study a memoryless QN method is developed for solving large-scale unconstrained optimization problems, in which the SR1 update based on the modified QN equation is applied. An important feature of the proposed method is that it preserves positive definiteness of the updates. The presented method possesses global and R-linear convergence. Numerical results showed that the proposed method is encouraging compared with the MMBFGS and FRCG methods.
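For illustration only, the following Python sketch shows how one step of a scaled memoryless SR1-type search direction can be formed from a modified secant vector that uses both gradients and function values at two successive points. It is not the authors' exact algorithm: the particular correction term, the scaling choice gamma (a Barzilai-Borwein-like value), the choice u = s, and the safeguards are assumptions made for the sake of the example.

import numpy as np

def modified_y(s, y, f_old, f_new, g_old, g_new):
    # Modified difference vector built from gradients and function values at
    # two successive points; theta follows one common third-order-accurate
    # form, with u = s chosen in the correction term (both are assumptions).
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new).dot(s)
    return y + (theta / s.dot(s)) * s

def memoryless_sr1_direction(g_new, s, y_bar, eps=1e-8):
    # Search direction d = -H g_new, where H is the SR1 update of the scaled
    # identity gamma*I; H is never stored, only inner products are formed.
    sy = s.dot(y_bar)
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y_bar):
        return -g_new                      # curvature test failed: steepest descent
    gamma = sy / y_bar.dot(y_bar)          # assumed scaling (Barzilai-Borwein-like)
    v = s - gamma * y_bar                  # SR1 correction vector
    vy = v.dot(y_bar)
    # Skip the rank-one term if its denominator is tiny or if the updated
    # matrix would lose positive definiteness (its eigenvalues are gamma,
    # with multiplicity n-1, and gamma + v.v/vy).
    if abs(vy) <= eps * np.linalg.norm(v) * np.linalg.norm(y_bar) \
            or gamma + v.dot(v) / vy <= 0.0:
        return -gamma * g_new
    return -(gamma * g_new + (v.dot(g_new) / vy) * v)

Because the approximation is reset to a positive multiple of the identity at every iteration, the product with the gradient is assembled from a handful of inner products and no matrix is ever stored; the final test discards the rank-one term whenever it would destroy positive definiteness of the update.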
Similar articles
Scaled memoryless symmetric rank one method for large-scale optimization
This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from the identity matrix. Hence its search direction can be computed without the storage of matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization...
A limited memory adaptive trust-region approach for large-scale unconstrained optimization
This study is concerned with a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique decreases the number of subproblems solved, while utilizing the structure of limited memory quasi-Newt...
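For context, the subproblem that trust-region methods of this kind solve at each iterate can be written in the standard form (the compact limited-memory representation of B_k and the particular adaptive radius rule are specific to the paper above)

\[
\min_{d \in \mathbb{R}^n} \; m_k(d) = f_k + g_k^{T} d + \tfrac{1}{2}\, d^{T} B_k d
\quad \text{subject to} \quad \|d\| \le \Delta_k ,
\]

where B_k is the limited-memory quasi-Newton approximation of the Hessian and \Delta_k is the trust-region radius; an adaptive strategy adjusts \Delta_k using information gathered at the current iterate, for instance the agreement between the actual reduction in f and the reduction predicted by m_k, rather than a fixed update rule.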
A Three-terms Conjugate Gradient Algorithm for Solving Large-Scale Systems of Nonlinear Equations
The nonlinear conjugate gradient method is well known in solving large-scale unconstrained optimization problems due to its low storage requirement and simplicity of implementation. Research activities on its application to higher dimensional systems of nonlinear equations are just beginning. This paper presents a three-term Conjugate Gradient algorithm for solving large-scale systems of nonlinear e...
The modified BFGS method with new secant relation for unconstrained optimization problems
Using Taylor's series, we propose a modified secant relation to get a more accurate approximation of the second curvature of the objective function. Then, based on this modified secant relation, we present a new BFGS method for solving unconstrained optimization problems. The proposed method makes use of both gradient and function values, while the usual secant relation uses only gradient values. U...
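For orientation, one widely used modified secant relation of this kind, obtained by keeping third-order terms in the Taylor expansions of f and its gradient, is (the exact relation proposed in the paper above may differ)

\[
B_{k+1} s_k = \bar{y}_k, \qquad
\bar{y}_k = y_k + \frac{\theta_k}{s_k^{T} u_k}\, u_k, \qquad
\theta_k = 6\,(f_k - f_{k+1}) + 3\,(g_k + g_{k+1})^{T} s_k ,
\]

where s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, and u_k is any vector with s_k^{T} u_k \neq 0 (often u_k = s_k). Unlike the classical relation B_{k+1} s_k = y_k, the corrected vector satisfies s_k^{T}\bar{y}_k = s_k^{T}\nabla^2 f(x_{k+1}) s_k + O(\|s_k\|^4), one order more accurate than the O(\|s_k\|^3) error of s_k^{T} y_k.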
A new quasi-Newton pattern search method based on symmetric rank-one update for unconstrained optimization
This paper proposes a new robust and quickly convergent pattern search method based on an implementation of the OCSSR1 (Optimal Conditioning Based Self-Scaling Symmetric Rank-One) algorithm [M.R. Osborne, L.P. Sun, A new approach to symmetric rank-one updating, IMA Journal of Numerical Analysis 19 (1999) 497-507] for unconstrained optimization. This method utilizes the factorization of approximatin...