Search results for: quasi newton algorithm
Number of results: 844645
Newton-type methods and quasi-Newton methods have proven very successful for solving dense unconstrained optimization problems. Recently there has been considerable interest in extending these methods to large problems in which the Hessian matrix has a known a priori sparsity pattern. This paper treats sparse quasi-Newton methods in a uniform fashion and shows the effect of loss of pos...
Solving an optimization problem whose objective function is the sum of two convex functions has recently received considerable interest in the context of image processing. In particular, we are interested in the scenario in which a non-differentiable convex function, such as the total variation (TV) norm, is included in the objective function, owing to the many variational models established in image proce...
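A standard solver for such two-term convex objectives is the proximal gradient (forward-backward) method. The sketch below is a minimal illustration that substitutes an ℓ1 penalty for the TV norm (whose proximal operator is more involved); the least-squares data term, problem sizes, and step size are illustrative assumptions, not details from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5||Ax - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# illustrative sparse-recovery instance (all values are assumptions)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10); x_true[0] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L for the smooth term
x = ista(A, b, lam=0.5, step=step)
```

With step size 1/L, each iteration is guaranteed not to increase the composite objective, which is the property the convergence analyses in this literature build on.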
This paper presents a provably convergent multifidelity optimization algorithm for unconstrained problems that does not require high-fidelity gradients. The method uses a radial basis function interpolation to capture the error between a high-fidelity function and a low-fidelity function. The error interpolation is added to the low-fidelity function to create a surrogate model of the high-fidel...
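The error-interpolation idea can be sketched in a few lines: fit a radial basis function interpolant to samples of the discrepancy f_hi − f_lo, then add it back to the low-fidelity model. The 1-D toy models, Gaussian kernel, and sample locations below are illustrative assumptions; the paper's algorithm also includes trust-region management and convergence safeguards that are omitted here:

```python
import numpy as np

def f_hi(x):                               # toy "expensive" high-fidelity model
    return np.sin(3.0 * x) + x ** 2

def f_lo(x):                               # toy cheap low-fidelity model
    return x ** 2

def rbf_interpolant(xs, ys, eps=1.0):
    """Gaussian RBF interpolation of sampled values ys at nodes xs."""
    K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
    w = np.linalg.solve(K, ys)
    return lambda x: np.exp(-eps * (x - xs) ** 2) @ w

xs = np.linspace(-1.0, 1.0, 7)             # high-fidelity sample locations
err = rbf_interpolant(xs, f_hi(xs) - f_lo(xs))
surrogate = lambda x: f_lo(x) + err(x)     # matches f_hi exactly at the samples
```

By construction the surrogate interpolates the high-fidelity function at every sample point, so optimization can proceed on the cheap surrogate between (expensive) high-fidelity evaluations.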
...sequence {x_k}. A similar result also holds for quasi-Newton methods with trust regions (see [16]). Another special type of quasi-Newton method keeps the quasi-Newton matrices sparse. Large-scale problems quite often have separable structure, which leads to special structure in the Hessian matrices; in such cases we can require the quasi-Newton matrices to have similar structure.
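For reference, the dense BFGS update that such sparse variants start from can be sketched as follows; sparse methods additionally constrain the updated matrix to the known sparsity pattern, a projection step omitted here. The random test pair is purely illustrative:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Dense BFGS update of a Hessian approximation B from the pair (s, y).
    Sparse variants would additionally project the result onto the known
    Hessian sparsity pattern (omitted in this sketch)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

rng = np.random.default_rng(1)
B = np.eye(3)                              # initial symmetric approximation
s = rng.standard_normal(3)                 # step:      x_{k+1} - x_k
y = s + 0.1 * rng.standard_normal(3)       # gradient change, with y's > 0
B1 = bfgs_update(B, s, y)
```

The update preserves symmetry and enforces the secant condition B1 s = y, which is exactly the property a structured variant must reconcile with the sparsity constraint.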
In this paper, a non-monotone line search procedure is studied, combined with the non-quasi-Newton family. Under a uniform convexity assumption on the objective function, the global and superlinear convergence of the non-quasi-Newton family with the proposed non-monotone line search is proved under suitable conditions.
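A non-monotone (Grippo-Lampariello-Lucidi-style) backtracking rule accepts a step when the new function value lies below the maximum of the last M values, rather than the current one, which allows occasional increases. The sketch below uses a steepest-descent direction as a stand-in for the non-quasi-Newton direction; the constants and test problem are illustrative assumptions:

```python
import numpy as np

def nonmonotone_step(f, g, x, d, hist, c=1e-4, tau=0.5, M=5):
    """Backtracking that compares against the max of the last M f-values."""
    fmax = max(hist[-M:])
    alpha, slope = 1.0, c * (g @ d)        # slope <= 0 for a descent direction
    while f(x + alpha * d) > fmax + alpha * slope:
        alpha *= tau
    return alpha

# simple quadratic test problem (an illustrative assumption)
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([2.0, -1.0])
hist = [f(x)]
for _ in range(20):
    g = grad(x)
    d = -g                                 # stand-in search direction
    x = x + nonmonotone_step(f, g, x, d, hist) * d
    hist.append(f(x))
```

Because acceptance is measured against a running maximum, the iterates need not decrease monotonically, yet the scheme still drives the objective to its minimum on this convex example.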
Random Perturbation of the Variable Metric Method for Unconstrained Nonsmooth Nonconvex Optimization
We consider the global optimization of a nonsmooth (nondifferentiable) nonconvex real function. We introduce a variable metric descent method adapted to nonsmooth situations, which is modified by the incorporation of suitable random perturbations. Convergence to a global minimum is established and a simple method for the generation of suitable perturbations is introduced. An algorithm is propos...
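The perturbation idea can be illustrated on a smooth stand-in (the paper's method handles nonsmooth objectives through a variable metric, which this sketch omits): alternate a local descent step with a Gaussian perturbation that is kept only if it improves the objective, while tracking the best point found. The two-well test function and all constants below are illustrative assumptions:

```python
import numpy as np

def perturbed_descent(f, grad, x0, steps=200, lr=0.05, sigma=0.5, seed=0):
    """Local descent steps plus improving Gaussian perturbations (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x, f(x)
    for _ in range(steps):
        x = x - lr * grad(x)                       # local descent step
        cand = x + sigma * rng.standard_normal(x.shape)
        if f(cand) < f(x):                         # keep improving perturbations
            x = cand
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# two-well test function: a local minimum near x=1, the global one near x=-1
f = lambda x: float((x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0])
grad = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0) + 0.3])
x_best, f_best = perturbed_descent(f, grad, np.array([1.0]))
```

Started in the basin of the local minimum, plain descent would stall there; the random perturbations give the iterate a chance to escape toward the global basin, which is the mechanism the convergence result formalizes.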
The effect of nonlinearly scaling the objective function on the variable-metric method is investigated, and Broyden's update is modified so that a property of invariance to the scaling is satisfied. A new three-parameter class of updates is generated, and criteria for an optimal choice of the parameters are given. Numerical experiments compare the performance of a number of algorithms of the re...
Mixture of experts (ME) is a modular neural network architecture for supervised learning. A double-loop Expectation-Maximization (EM) algorithm has been introduced for the ME architecture to adjust its parameters, and the iteratively reweighted least squares (IRLS) algorithm is used to perform the maximization in the inner loop [Jordan, M.I., Jacobs, R.A. (1994). Hierarchical mixture of experts a...
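The outer EM loop can be illustrated on a heavily simplified ME model: two linear experts, fixed equal gating priors instead of a trained gating network, and a closed-form weighted least-squares M-step in place of IRLS. Everything below (data, noise level, variance) is an illustrative assumption, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
# piecewise-linear target: slope 2 on the right, slope -1 on the left
y = np.where(X[:, 0] > 0, 2.0 * X[:, 0], -X[:, 0]) + 0.05 * rng.standard_normal(100)
Phi = np.hstack([X, np.ones((100, 1))])    # linear experts with a bias term

w = 0.1 * rng.standard_normal((2, 2))      # per-expert weights
pi = np.array([0.5, 0.5])                  # fixed gating priors (simplification)
sigma2 = 0.25                              # fixed noise variance (simplification)

for _ in range(50):
    # E-step: posterior responsibility of each expert for each data point
    preds = Phi @ w.T                                       # (100, 2)
    lik = np.exp(-(y[:, None] - preds) ** 2 / (2 * sigma2)) * pi
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted least squares per expert
    for k in range(2):
        Wk = r[:, k]
        w[k] = np.linalg.solve((Phi.T * Wk) @ Phi, Phi.T @ (Wk * y))
```

The E-step softly assigns points to experts and the M-step refits each expert on its weighted share of the data; the paper's double-loop algorithm replaces this closed-form M-step with inner IRLS iterations.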
We compare the performance of several robust large-scale minimization algorithms applied to the minimization of the cost functional in the solution of inverse problems related to parameter estimation for the parabolized Navier-Stokes equations. The methods compared are quasi-Newton (BFGS), limited-memory quasi-Newton (L-BFGS) [1], the Hessian-free Newton method [2], and a new hybrid...
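Among the methods compared, L-BFGS is distinguished by forming the quasi-Newton direction implicitly from only the last m curvature pairs via the two-loop recursion, which is what makes it viable at large scale. The sketch below is a minimal illustration on a convex quadratic with a fixed unit step in place of a line search; it is not the authors' implementation:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the implicit inverse-Hessian estimate to g."""
    q, alphas = g.astype(float).copy(), []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    if s_list:                                  # scale by the latest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

def lbfgs(grad, x, m=5, iters=100):
    """Minimal L-BFGS loop with a fixed unit step (real codes line-search)."""
    s_list, y_list = [], []
    for _ in range(iters):
        g = grad(x)
        x_new = x + lbfgs_direction(g, s_list, y_list)
        s, y = x_new - x, grad(x_new) - g
        if s @ y > 1e-12:                       # keep only safely curved pairs
            s_list.append(s); y_list.append(y)
            s_list, y_list = s_list[-m:], y_list[-m:]
        x = x_new
    return x

# convex quadratic 0.5 x'Ax - b'x with minimizer [1, 2] (illustrative)
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([4.0, 4.5])
grad = lambda x: A @ x - b
x = lbfgs(grad, np.zeros(2))
```

Full BFGS stores and updates a dense n-by-n matrix, whereas this recursion needs only the 2m stored vectors, which is the trade-off the comparison in the abstract is probing.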