Search results for: newton quasi
Number of results: 102092
A derivative-free Quasi-Newton (DFQN) method previously published [J. Greenstadt, Math. Comp., v. 26, 1972, pp. 145-166] has been revised and simplified. The main modification has the effect of keeping all the successive approximants to the Hessian matrix positive-definite. This, coupled with some improvements in the line search, has enhanced the performance of the method considerably. The resu...
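A common way to keep successive Hessian approximants positive-definite, in the spirit of the modification described above, is to accept an update only when the curvature condition s.y > 0 holds. The sketch below illustrates that safeguard with a standard BFGS-type update; it is not Greenstadt's DFQN formula, and the step length, test problem, and tolerance are illustrative.

import numpy as np

def safeguarded_bfgs_update(B, s, y, eps=1e-10):
    # Apply a BFGS-type update only when the curvature condition s.y > 0 holds;
    # otherwise keep B unchanged, so positive-definiteness is preserved.
    # (Illustrative safeguard, not the DFQN formula from the paper.)
    sy = s @ y
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# Small demonstration on a convex quadratic f(x) = 0.5 x.T A x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
x = np.array([1.0, -1.0])
for _ in range(5):
    g = A @ x                                  # gradient of the quadratic
    s = -0.5 * np.linalg.solve(B, g)           # damped quasi-Newton step
    y = A @ (x + s) - g                        # observed gradient change
    B = safeguarded_bfgs_update(B, s, y)
    x = x + s
    assert np.all(np.linalg.eigvalsh(B) > 0)   # approximant stays positive-definite
print(np.linalg.eigvalsh(B))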
In this paper we propose an iterative learning control scheme based on the quasi-Newton method. The iterative learning control is designed to improve the performance of systems working cyclically. We consider a general class of systems described by a continuously differentiable operator acting in Banach spaces. The sufficient conditions for the convergence of the quasi-Newton iterative learning alg...
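As a rough finite-dimensional illustration of the idea (the paper itself works with operators between Banach spaces), one trial of a discrete-time linear plant can be lifted into y = G u and the control updated from trial to trial with a Broyden-style approximation of the inverse plant applied to the tracking error. The plant, reference, and update below are all illustrative, not the scheme analyzed in the paper.

import numpy as np
from scipy.linalg import toeplitz

# Lifted single-trial model y = G u for a toy impulse response; the goal of
# iterative learning control is to drive the tracking error to zero over trials.
N = 20
h = 0.8 ** np.arange(N)                      # illustrative impulse response
G = np.tril(toeplitz(h))                     # lower-triangular convolution matrix
y_d = np.sin(np.linspace(0.0, np.pi, N))     # reference to track

u = np.zeros(N)
H = np.eye(N)                                # running approximation of G^{-1}
e = y_d - G @ u
for trial in range(15):
    du = H @ e                               # quasi-Newton-style learning update
    u = u + du
    e_new = y_d - G @ u
    de = e_new - e                           # observed change of the error
    if de @ de > 1e-14:                      # Broyden-type update of the inverse map
        H = H + np.outer(du - H @ (-de), -de) / (de @ de)
    e = e_new
print(np.linalg.norm(e))                     # tracking error after 15 trials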
We study how to use the BFGS quasi-Newton matrices to precondition minimization methods for problems where the storage is critical. We give an update formula which generates matrices using information from the last m iterations, where m is any number supplied by the user. The quasi-Newton matrix is updated at every iteration by dropping the oldest information and replacing it by the newest info...
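A minimal sketch of the limited-memory mechanism described above: only the last m step/gradient-difference pairs are stored, the oldest pair is dropped when a new one arrives, and the product of the implicit quasi-Newton matrix with a vector is formed by the usual two-loop recursion. Names and tolerances are illustrative.

from collections import deque
import numpy as np

class LBFGSMemory:
    # Keeps the m most recent (s, y) pairs; the oldest pair is dropped
    # automatically when a new one is appended (illustrative sketch).
    def __init__(self, m):
        self.pairs = deque(maxlen=m)          # drop oldest info, keep newest

    def update(self, s, y):
        if s @ y > 1e-10:                     # store only curvature-positive pairs
            self.pairs.append((s, y))

    def apply(self, g):
        # Two-loop recursion: returns H_k @ g without ever forming H_k.
        q = g.copy()
        alphas = []
        for s, y in reversed(self.pairs):     # newest pair first
            rho = 1.0 / (s @ y)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((rho, a, s, y))
        if self.pairs:                        # scale by gamma = s.y / y.y (latest pair)
            s, y = self.pairs[-1]
            q *= (s @ y) / (y @ y)
        for rho, a, s, y in reversed(alphas): # oldest pair first
            b = rho * (y @ q)
            q += (a - b) * s
        return q                              # usable as a preconditioned direction

mem = LBFGSMemory(m=5)
# After each accepted step: mem.update(x_new - x_old, g_new - g_old)
# Search or preconditioning direction: d = -mem.apply(g_new)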
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression u...
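The quadratic-model view the abstract starts from can be stated concretely: a quasi-Newton matrix B_k defines the local model m_k(x) = f(x_k) + g_k^T (x - x_k) + 0.5 (x - x_k)^T B_k (x - x_k), and the BFGS update chooses B_{k+1} so that this model reproduces the observed gradient change, i.e. the secant condition B_{k+1} s_k = y_k. The check below only verifies that condition; the Bayesian-regression interpretation itself is not reproduced here.

import numpy as np

def bfgs_update(B, s, y):
    # Standard BFGS update; by construction the new matrix satisfies B_new @ s = y.
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + np.eye(3)                  # a positive-definite "true" Hessian
s = rng.standard_normal(3)               # a step
y = A @ s                                # the gradient change it produces
B_new = bfgs_update(np.eye(3), s, y)
print(np.allclose(B_new @ s, y))         # True: the local quadratic model fits (s, y)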
Analyses of the convergence properties of general quasi-Newton methods are presented, particular attention being paid to how the approximate solutions and the iteration matrices approach their final values. It is further shown that when Broyden's algorithm is applied to linear systems, the error norms are majorised by a superlinearly convergent sequence of an unusual kind.
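For concreteness, the sketch below runs Broyden's ("good") update on a small linear system A x = b and records the residual norms whose behaviour the abstract analyzes; the majorising sequence itself is given in the paper and is not reproduced here. In exact arithmetic, Broyden's method applied to a nonsingular n-by-n linear system terminates in finitely many steps (at most 2n).

import numpy as np

def broyden_linear(A, b, x0, iters=20):
    # Broyden's "good" method on F(x) = A x - b, starting from an identity
    # Jacobian approximation (illustrative; not the paper's analysis).
    x = x0.astype(float)
    B = np.eye(len(b))                     # approximate Jacobian
    F = A @ x - b
    errs = [np.linalg.norm(F)]
    for _ in range(iters):
        s = np.linalg.solve(B, -F)         # quasi-Newton step
        x = x + s
        F_new = A @ x - b
        y = F_new - F
        B = B + np.outer(y - B @ s, s) / (s @ s)   # Broyden rank-one update
        F = F_new
        errs.append(np.linalg.norm(F))
    return x, errs

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x, errs = broyden_linear(A, b, np.zeros(3))
print(errs[:6])                            # residual norms shrink toward zero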
Training in the random neural network (RNN) is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (weights corresponding to positive and negative connections). We propose here a technique for error minimization that is based on the use of quasi-Newton optimization techniques. Such techniques offer more sophisticated exploitation ...
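The recipe the abstract describes, stripped of the random-neural-network specifics, is: write the training error as a function of the weight vector and hand it to a quasi-Newton optimizer rather than plain gradient descent. The sketch below does that for a toy one-hidden-layer network using SciPy's BFGS implementation; the network, data, and sizes are illustrative and are not the RNN model of the paper.

import numpy as np
from scipy.optimize import minimize

# Toy regression data and a tiny network: NOT the random neural network model,
# just the "error function + quasi-Newton optimizer" recipe.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))
t = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def unpack(w):
    W1 = w[:6].reshape(2, 3)                 # input-to-hidden weights
    W2 = w[6:9]                              # hidden-to-output weights
    return W1, W2

def error(w):
    W1, W2 = unpack(w)
    y = np.tanh(X @ W1) @ W2                 # network output
    return 0.5 * np.mean((y - t) ** 2)       # squared-error objective

res = minimize(error, x0=rng.standard_normal(9), method="BFGS")
print(res.fun, res.nit)                      # final error and iteration count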
We first recall some properties of infinite tridiagonal matrices considered as matrix transformations in sequence spaces of the forms s_ξ, s_ξ, s_ξ^(c), or l_p(ξ). Then, we give some results on the finite section method for approximating a solution of an infinite linear system. Finally, using a quasi-Newton method, we construct a sequence that converges fast to a solution of an infinite linear ...
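A minimal sketch of the finite section idea: replace the infinite tridiagonal system by its leading n-by-n section, solve, and let n grow; the leading components of the truncated solutions settle down. The operator and right-hand side below are illustrative, and neither the sequence-space setting nor the quasi-Newton construction of the abstract is reproduced.

import numpy as np

def finite_section(n):
    # Leading n x n section of an infinite tridiagonal system A x = b with
    # diagonal 4 and off-diagonals 1 (illustrative operator), b_i = 1/2**i.
    A = 4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    b = 0.5 ** np.arange(1, n + 1)
    return np.linalg.solve(A, b)

# The first components of the truncated solutions stabilize as n grows,
# which is the convergence the finite section method aims for.
for n in (5, 10, 20, 40):
    print(n, finite_section(n)[:3])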
A classical model of Newton iterations that takes error terms into account is the quasi-Newton method, which assumes perturbed Jacobians at each step. Its high convergence orders were characterized by Dennis and Moré [Math. Comp. 28 (1974), 549–560]. The inexact Newton method constitutes another such model, since it assumes that at each step the linear systems are only approximat...
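The two models can be written side by side: a quasi-Newton step solves (J(x_k) + E_k) s = -F(x_k) exactly with a perturbed Jacobian, while an inexact Newton step uses the exact Jacobian but only requires J(x_k) s = -F(x_k) + r_k with ||r_k|| <= eta_k ||F(x_k)||. The sketch below runs one iteration of each per loop on a small nonlinear system; the test problem and perturbation sizes are illustrative, and the convergence-order characterizations are not reproduced.

import numpy as np

# Solve F(x) = 0 with F(x) = (x0^2 + x1 - 3, x0 + x1^2 - 5); the root is (1, 2).
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

rng = np.random.default_rng(2)
x_qn = np.array([1.5, 1.5])
x_in = np.array([1.5, 1.5])
for k in range(8):
    # Quasi-Newton model: perturbed Jacobian, exact linear solve.
    E = 1e-3 * rng.standard_normal((2, 2))
    x_qn = x_qn + np.linalg.solve(J(x_qn) + E, -F(x_qn))
    # Inexact Newton model: exact Jacobian, residual of size eta * ||F||.
    Fx = F(x_in)
    r = 1e-2 * np.linalg.norm(Fx) * np.array([1.0, -1.0]) / np.sqrt(2.0)
    x_in = x_in + np.linalg.solve(J(x_in), -Fx + r)
print(np.linalg.norm(F(x_qn)), np.linalg.norm(F(x_in)))   # both near zero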
In this paper, we present a method for solving the finite nonlinear min-max problem. By using quasi-Newton methods, we approximately solve a sequence of differentiable subproblems where, for each subproblem, the cost function to minimize is a global regularization underestimating the finite maximum function. We show that every cluster point of the sequence generated is a stationary point of the min-...
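The truncated abstract does not give the paper's regularization, but a standard smooth function that underestimates a finite maximum is the shifted log-sum-exp (1/p) log((1/n) sum_i exp(p f_i(x))) <= max_i f_i(x), which tightens as p grows. The sketch below minimizes that surrogate for a few smooth f_i with SciPy's BFGS, increasing p between solves; it illustrates the overall min-max scheme rather than the paper's algorithm.

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Finite min-max: minimize max_i f_i(x) for a few smooth f_i (illustrative).
def fs(x):
    return np.array([x[0]**2 + x[1]**2,
                     (x[0] - 1.0)**2 + 0.5 * x[1],
                     np.sin(x[0]) + x[1]**2])

def smoothed_max(x, p):
    f = fs(x)
    # (1/p) log(mean(exp(p f))) <= max(f): a smooth underestimate of the max.
    return (logsumexp(p * f) - np.log(len(f))) / p

x = np.zeros(2)
for p in (1.0, 10.0, 100.0):                 # tighten the regularization
    x = minimize(lambda z: smoothed_max(z, p), x, method="BFGS").x
print(x, fs(x).max())                        # approximate min-max point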