Search results for: line search methods
Number of results: 2,434,504
We consider unconstrained minimization of a finite sum of N continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and, more generally, line search) methods, where the full gradient (the sum of the N component costs’ gradients) at each iteration k is replaced with an inexpensive approximation based on a sub-sample Nk of the component costs’ gradients, are availabl...
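The sub-sampling idea in the abstract above can be illustrated with a minimal sketch. The function names, the toy component costs, and the fixed step size are all illustrative assumptions, not the paper's actual scheme:

```python
import random

def subsampled_gradient_descent(grads, x0, step=0.02, batch=2, iters=200, seed=0):
    """Minimize a finite sum f(x) = sum_i f_i(x) where, at each iteration,
    the full gradient is replaced by a scaled sub-sample of component gradients."""
    rng = random.Random(seed)
    x, n = x0, len(grads)
    for _ in range(iters):
        sample = rng.sample(range(n), batch)
        # inexpensive approximation of the full gradient: rescaled sub-sample sum
        g = (n / batch) * sum(grads[i](x) for i in sample)
        x -= step * g
    return x

# Toy example: f_i(x) = (x - c_i)^2; the sum is minimized at the mean of the c_i
centers = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
x_star = subsampled_gradient_descent(grads, x0=0.0)
```

With a fixed step the iterates hover near the minimizer rather than converging exactly; a decaying step schedule is the usual remedy.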
Beam search (BS) is used as a heuristic to solve various combinatorial optimization problems, ranging from scheduling to assembly line balancing. In this paper, we develop a backtracking and an exchange-of-information (EOI) procedure to enhance the traditional beam search method. The backtracking enables us to return to previous solution states in the search process with the expectation of obta...
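The traditional beam search that the abstract above builds on can be sketched on a toy 0/1 knapsack instance. The problem, width, and data are illustrative; the paper's backtracking and EOI enhancements are not shown:

```python
def beam_search(items, capacity, width=3):
    """Heuristic 0/1 knapsack via beam search: at each level, branch on
    taking/skipping one item and keep only the `width` best partial states."""
    beam = [(0, 0, [])]  # each state: (value, weight, decisions)
    for value, weight in items:
        candidates = []
        for v, w, chosen in beam:
            candidates.append((v, w, chosen + [0]))            # skip item
            if w + weight <= capacity:
                candidates.append((v + value, w + weight, chosen + [1]))  # take item
        # prune: keep the `width` most valuable feasible partial solutions
        beam = sorted(candidates, key=lambda s: s[0], reverse=True)[:width]
    return max(beam, key=lambda s: s[0])

items = [(10, 5), (6, 4), (7, 3), (2, 1)]   # (value, weight) pairs
best = beam_search(items, capacity=8)        # best[0] is the achieved value
```

Because the beam prunes greedily, the heuristic can discard the state leading to the optimum; restoring such states is exactly what a backtracking mechanism is for.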
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN) that have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we evaluat...
Line search methods are proposed for nonlinear programming using Fletcher and Leyffer’s filter method, which replaces the traditional merit function. Their global convergence properties are analyzed. The presented framework is applied to active set SQP and barrier interior point algorithms. Under mild assumptions it is shown that every limit point of the sequence of iterates generated by the al...
Neural network learning algorithms based on conjugate gradient and quasi-Newton techniques such as the Broyden, DFP, BFGS, and SSVM algorithms require exact or inexact line searches in order to satisfy their convergence criteria. Line searches are very costly and slow down the learning process. This paper presents new neural network learning algorithms based on Hoshino's weak line se...
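For context on the inexact line searches these methods rely on, here is a standard Armijo backtracking line search (a generic sketch, not Hoshino's weak line search from the abstract above):

```python
def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, c=1e-4, tau=0.5):
    """Backtracking line search satisfying the Armijo sufficient-decrease
    condition f(x + a*d) <= f(x) + c*a*grad_f(x).d for a descent direction d."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))  # directional derivative
    assert slope < 0, "d must be a descent direction"
    a = alpha0
    while f([xi + a * di for xi, di in zip(x, d)]) > fx + c * a * slope:
        a *= tau
    return a

# Toy quadratic with a steepest-descent direction
f = lambda x: x[0] ** 2 + 4 * x[1] ** 2
grad = lambda x: [2 * x[0], 8 * x[1]]
x = [1.0, 1.0]
d = [-g for g in grad(x)]
a = armijo_backtracking(f, grad, x, d)   # step length guaranteeing decrease
```

Each rejected trial step costs a full function evaluation, which is why line-search cost matters for neural network training.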
In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they utilize only gradient information and have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which also uses only gradient information, show that the m...
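The PRP+ baseline mentioned above can be sketched as follows. The test problem, step-size rule, and restart safeguard are illustrative choices, not the paper's setup:

```python
def prp_plus_cg(f, grad, x, iters=100, tol=1e-8):
    """Nonlinear conjugate gradient with the PRP+ update
    beta = max(0, g_new.(g_new - g_old) / (g_old.g_old)),
    paired with a simple Armijo backtracking line search."""
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        a, fx, slope = 1.0, f(x), dot(g, d)
        if slope >= 0:  # safeguard: restart along steepest descent
            d, slope = [-gi for gi in g], -dot(g, g)
        while f([xi + a * di for xi, di in zip(x, d)]) > fx + 1e-4 * a * slope:
            a *= 0.5
            if a < 1e-12:
                break
        x = [xi + a * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = max(0.0, dot(g_new, [gn - go for gn, go in zip(g_new, g)]) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 20 * (x[1] + 2)]
x_star = prp_plus_cg(f, grad, [0.0, 0.0])
```

Truncating beta at zero (the "+" in PRP+) is what restores global convergence guarantees that plain PRP lacks.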
It has been noticed by Wächter and Biegler that a number of interior point methods for nonlinear programming based on line search strategy may generate a sequence converging to an infeasible point. We show that by adopting a suitable merit function, a modified primal-dual equation, and a proper line search procedure, a class of interior point methods of line search type will generate a sequence...
A fundamental class of matrix optimization problems that arise in many areas of science and engineering is that of quadratic optimization with orthogonality constraints. Such problems can be solved using line-search methods on the Stiefel manifold, which are known to converge globally under mild conditions. To determine the convergence rates of these methods, we give an explicit estimate of the...
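A line-search method on the Stiefel manifold for a quadratic objective with orthogonality constraints can be sketched as below. The QR retraction, Armijo rule, and toy diagonal matrix are illustrative assumptions; the paper's convergence-rate estimate is not reproduced here:

```python
import numpy as np

def stiefel_linesearch_max_trace(A, p, iters=1000, seed=0):
    """Maximize trace(X^T A X) over the Stiefel manifold {X : X^T X = I_p}
    by Riemannian gradient ascent with a QR retraction and Armijo backtracking."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random orthonormal start
    f = lambda X: np.trace(X.T @ A @ X)
    for _ in range(iters):
        G = 2 * A @ X                                  # Euclidean gradient
        sym = (X.T @ G + G.T @ X) / 2
        R = G - X @ sym                                # project onto tangent space
        retract = lambda t: np.linalg.qr(X + t * R)[0] # QR retraction back to manifold
        t, fX, norm2 = 1.0, f(X), float(np.sum(R * R))
        while f(retract(t)) < fX + 1e-4 * t * norm2:   # Armijo (ascent form)
            t *= 0.5
            if t < 1e-12:
                break
        X = retract(t)
    return X

A = np.diag([5.0, 3.0, 1.0, 0.5])
X = stiefel_linesearch_max_trace(A, p=2)   # optimum spans the top-2 eigenvectors
```

Maximizing this trace over orthonormal X recovers a dominant invariant subspace, so the optimal value here is the sum of the two largest eigenvalues, 5 + 3 = 8.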
The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash game where both the utility functions and the strategy spaces of each player also depend on the strategies chosen by all other players. This problem is rather difficult to solve, and there are only a few methods available in the literature. One of the most popular ones is the so-called relaxation method which i...