Search results for: newton quasi
Number of results: 102,092
We describe a parallel method for unconstrained optimization based on the quasi-Newton descent method of Broyden, Fletcher, Goldfarb, and Shanno. Our algorithm is suitable for both single-instruction and multiple-instruction parallel architectures and has only linear memory requirements in the number of parameters used to fit the data. We also present the results of numerical testing on both sin...
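As a point of reference for the abstract above, here is a minimal dense BFGS sketch (hypothetical illustrative code, not the parallel variant the paper describes): the inverse-Hessian update of Broyden, Fletcher, Goldfarb, and Shanno, paired with a simple Armijo backtracking line search.

```python
import numpy as np

def bfgs(f, grad, x0, max_iter=100, tol=1e-8):
    """Minimal BFGS sketch with Armijo backtracking (illustrative only)."""
    n = len(x0)
    H = np.eye(n)                      # inverse-Hessian approximation
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton descent direction
        alpha, c = 1.0, 1e-4           # Armijo backtracking line search
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Convex quadratic test: f(x) = 0.5 x^T A x - b^T x, whose minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = bfgs(f, lambda x: A @ x - b, np.zeros(2))  # -> approx [0.2, 0.4]
```

On a quadratic the exact solution is `A x = b`, i.e. `[0.2, 0.4]` here, so the sketch is easy to sanity-check; a production implementation would use a Wolfe line search and, for large problems, a limited-memory representation of `H`.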
A new method for the solution of minimization problems with simple bounds is presented. Global convergence of a general scheme requiring the solution of a single linear system at each iteration is proved and a superlinear convergence rate is established without requiring the strict complementarity assumption. The theory presented covers Newton and Quasi-Newton methods, allows rapid changes in t...
In this report we present computational methods for the best multilinear rank approximation problem. We consider algorithms built on quasi-Newton methods operating on products of Grassmann manifolds. Specifically, we test and compare methods based on BFGS and L-BFGS updates in local and global coordinates with the Newton-Grassmann and alternating least squares methods. The performance of the quas...
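The L-BFGS update mentioned above is usually implemented with the standard two-loop recursion. The sketch below is the generic Euclidean version (the paper's variant operates on products of Grassmann manifolds, which this does not attempt): given the gradient `g` and the most recent curvature pairs `(s_i, y_i)`, it returns the search direction `-H_k g` without ever forming the inverse-Hessian approximation `H_k`.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H_k @ g from stored pairs (s_i, y_i)."""
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial scaling H0 = gamma * I from the most recent pair
    if s_list:
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / (y @ s)
        beta = rho * (y @ r)
        r += (a - beta) * s
    return -r

# With a single stored pair, the recursion reproduces one exact BFGS
# inverse-Hessian update applied to g, which makes it easy to check:
s, y, g = np.array([1.0, 0.0]), np.array([2.0, 1.0]), np.array([1.0, 1.0])
rho, gamma = 1.0 / (y @ s), (s @ y) / (y @ y)
V = np.eye(2) - rho * np.outer(s, y)
H = V @ (gamma * np.eye(2)) @ V.T + rho * np.outer(s, s)
d = lbfgs_direction(g, [s], [y])  # agrees with -H @ g, here [-0.4, -0.2]
```

Storing only the last `m` pairs gives the linear-memory behavior that makes L-BFGS attractive for large problems.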
In this paper we introduce a local convergence theory for Least Change Secant Update methods. This theory includes most known methods of this class, as well as some new interesting quasi-Newton methods. Further, we prove that this class of LCSU updates may be used to generate iterative linear methods to solve the Newton linear equation in the Inexact-Newton context. Convergence at a q-superlin...
In this study we propose two quasi-Newton methods to deal with traffic assignment in capacitated networks. The methods combine the Newton formula, column generation, and penalty techniques. The first method employs the gradient of the objective function to obtain an improving feasible direction scaled by the second-order derivatives. The second employs the Rosen gradient to obtain an improvi...
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are m...
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a mi...
The EM algorithm is one of the most commonly used methods of maximum likelihood estimation. In many practical applications, it converges at a frustratingly slow linear rate. The current paper considers an acceleration of the EM algorithm based on classical quasi-Newton optimization techniques. This acceleration seeks to steer the EM algorithm gradually toward the Newton-Raphson algorithm, which...
In part I of this article, we proposed a Lagrange–Newton–Krylov–Schur (LNKS) method for the solution of optimization problems that are constrained by partial differential equations. LNKS uses Krylov iterations to solve the linearized Karush–Kuhn–Tucker system of optimality conditions in the full space of states, adjoints, and decision variables, but invokes a preconditioner inspired by reduced ...