Search results for: newton quasi
Number of results: 102092
The smoothness-constrained least-squares method is widely used for two-dimensional (2D) and three-dimensional (3D) inversion of apparent resistivity data sets. The Gauss–Newton method, which recalculates the Jacobian matrix of partial derivatives at every iteration, is commonly used to solve the least-squares equation. The quasi-Newton method has also been used to reduce the computer time. In this...
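As a rough illustration of the trade-off this abstract describes, the hypothetical Python sketch below fits a toy exponential model: a damped Gauss–Newton step uses the Jacobian, but instead of recomputing the Jacobian at every iteration it is maintained with a Broyden rank-one quasi-Newton update. The model, data, and damping value are stand-ins, not the paper's resistivity inversion.

```python
import numpy as np

def residual(m, x, d):
    # toy forward model: d_pred = m0 * exp(m1 * x)  (hypothetical)
    return m[0] * np.exp(m[1] * x) - d

def jacobian(m, x):
    # analytic Jacobian of the toy model, evaluated only once below
    J = np.empty((x.size, 2))
    J[:, 0] = np.exp(m[1] * x)
    J[:, 1] = m[0] * x * np.exp(m[1] * x)
    return J

x = np.linspace(0.0, 1.0, 50)
d = 2.0 * np.exp(-1.5 * x)            # synthetic "observed" data
m = np.array([1.0, -1.0])             # starting model
lam = 1e-3                            # damping, standing in for smoothness

J = jacobian(m, x)                    # Jacobian computed once
for _ in range(50):
    r = residual(m, x, d)
    # damped Gauss-Newton normal equations
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    m_new = m + dm
    # quasi-Newton alternative to recomputation: Broyden rank-one update
    dr = residual(m_new, x, d) - r
    J = J + np.outer(dr - J @ dm, dm) / (dm @ dm)
    m = m_new

# the residual norm should shrink as m approaches [2.0, -1.5]
print(m, np.linalg.norm(residual(m, x, d)))
```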
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
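The abstract does not spell out its particular hybrid vector, so the sketch below substitutes the standard Powell-style damped BFGS update, which illustrates the same idea: the gradient change y is blended with Bs before the update, so the Hessian approximation stays positive definite even under nonconvex curvature. All problem data here are hypothetical.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """One damped BFGS update: y is replaced by a hybrid vector y_hat
    (Powell-style damping, a stand-in for the paper's hybrid vector)."""
    sBs = s @ B @ s
    sy = s @ y
    if sy < theta_min * sBs:
        # low or negative curvature: blend y with B s so that s'y_hat > 0
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
    else:
        theta = 1.0
    y_hat = theta * y + (1.0 - theta) * (B @ s)    # the hybrid vector
    Bs = B @ s
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)

# curvature from an indefinite Hessian would break the plain BFGS update
# (here s'y < 0), but the damped update keeps B positive definite
A = np.array([[1.0, 0.0], [0.0, -2.0]])            # indefinite "true" Hessian
B = np.eye(2)
s = np.array([0.1, 0.5])
y = A @ s                                          # gradient change, s'y < 0
B = damped_bfgs_update(B, s, y)
print("eigenvalues remain positive:", np.linalg.eigvalsh(B))
```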
Quasi-Newton methods are a class of numerical methods for solving the unconstrained optimization problem. To improve the overall efficiency of the resulting algorithms, we use an interesting quasi-Newton equation. In this manuscript, we present a modified BFGS update formula based on the new equation and give a search direction for solving optimization problems. We analyse the convergence ra...
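For context, here is a minimal sketch of the classical BFGS update and the secant (quasi-Newton) equation B_new s = y that it satisfies; the manuscript's modified formula, built on a new quasi-Newton equation, is not reproduced here.

```python
import numpy as np

def bfgs_update(B, s, y):
    # classical BFGS: rank-two correction satisfying the secant equation
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

# verify the secant equation B_new @ s = y on random curvature data
rng = np.random.default_rng(0)
B = np.eye(3)
s = rng.standard_normal(3)
y = s + 0.1 * rng.standard_normal(3)   # keeps the curvature condition s'y > 0
B_new = bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))       # True: the secant equation holds
```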
The problem of minimizing an objective that can be written as the sum of a set of n smooth and strongly convex functions is challenging because the cost of evaluating the function and its derivatives is proportional to the number of elements in the sum. The Incremental Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods that have a cost per iterat...
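A simplified sketch of the incremental quasi-Newton pattern: each component function keeps its own iterate, stored gradient, and BFGS Hessian approximation, and only one component is refreshed per iteration, so the per-iteration cost does not grow with n. The quadratic components are hypothetical, and this is not the authors' IQN code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, mu = 8, 3, 0.5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad(i, x):  # gradient of f_i(x) = 0.5*(a_i'x - b_i)^2 + 0.5*mu*|x|^2
    return A[i] * (A[i] @ x - b[i]) + mu * x

x = np.zeros(d)
z = np.zeros((n, d))                       # per-component iterates
g = np.array([grad(i, z[i]) for i in range(n)])
B = np.array([np.eye(d) for _ in range(n)])

for t in range(200):
    i = t % n                              # cyclic component selection
    # aggregate Newton-like step using all stored information
    x = np.linalg.solve(B.sum(axis=0),
                        np.einsum('ijk,ik->j', B, z) - g.sum(axis=0))
    s, y = x - z[i], grad(i, x) - g[i]
    if s @ y > 1e-12:                      # BFGS update of component i only
        Bs = B[i] @ s
        B[i] += np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
    z[i], g[i] = x, grad(i, x)

print("full gradient norm:", np.linalg.norm(sum(grad(i, x) for i in range(n))))
```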
A quasi-Newton algorithm for semi-infinite programming using an ℓ∞ exact penalty function is described, and numerical results are presented. Comparisons with three Newton algorithms and one other quasi-Newton algorithm show that the algorithm is very promising in practice. AMS classifications: 65K05, 90C30.
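To make the setting concrete, the sketch below discretizes a toy semi-infinite constraint and minimizes a max-type exact penalty with a quasi-Newton (BFGS) solver; the objective, constraint, and penalty weight are hypothetical, and the paper's actual algorithm is considerably more refined.

```python
import numpy as np
from scipy.optimize import minimize

T = np.linspace(0.0, 1.0, 200)        # discretization of the index set T

def f(x):                             # hypothetical objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x, t):                          # constraint g(x, t) <= 0 for all t in T
    return x[0] * t ** 2 + x[1] * t - 1.0

def penalty(x, mu=100.0):             # exact max-type penalty function
    return f(x) + mu * max(0.0, np.max(g(x, T)))

# a quasi-Newton solver applied to the (mildly nonsmooth) penalty
res = minimize(penalty, x0=np.array([0.0, 0.0]), method='BFGS')
print(res.x, "worst constraint value:", np.max(g(res.x, T)))
```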
We describe stochastic Newton and stochastic quasi-Newton approaches to efficiently solve large linear least-squares problems where the very large data sets present a significant computational burden (e.g., the size may exceed computer memory or data are collected in real time). In our proposed framework, stochasticity is introduced in two different ways as a means to overcome these compu...
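One plausible reading of how stochasticity enters, sketched below with made-up data: each iteration solves the normal equations of a random row sample, so only a mini-batch of the data is touched at a time. The step-size rule is an illustrative choice, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, batch = 100_000, 20, 1_000
A = rng.standard_normal((m, n))       # stands in for data too large to factorize
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n)
for k in range(200):
    idx = rng.choice(m, size=batch, replace=False)
    As, bs = A[idx], b[idx]
    g = As.T @ (As @ x - bs)          # mini-batch gradient estimate
    H = As.T @ As + 1e-6 * np.eye(n)  # sampled (regularized) Hessian
    # damped sampled-Newton step; the decaying step size tames sampling noise
    x -= np.linalg.solve(H, g) / (k + 1) ** 0.5
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```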
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper scrutinizes the application of diverse learning methods with respect to the speed of convergence in neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which has been applied for solving a non-singula...
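As a concrete, hypothetical example of the kind of learning rule such comparisons start from, here is the classical perceptron update on linearly separable data; the paper's own network and convergence comparison are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)     # linearly separable labels

w, bias, lr = np.zeros(2), 0.0, 0.1
for epoch in range(50):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + bias) <= 0:          # misclassified: apply update rule
            w += lr * yi * xi
            bias += lr * yi
            errors += 1
    if errors == 0:                            # converged on separable data
        break
print("epochs:", epoch + 1, "weights:", w)
```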
In this paper, we consider the problem of efficiently computing the eigenvalues of limited-memory quasi-Newton matrices that exhibit a compact formulation. In addition, we produce a compact formula for quasi-Newton matrices generated by any member of the Broyden convex class of updates. Our proposed method makes use of efficient updates to the QR factorization that substantially reduces the cos...
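A minimal sketch of the compact-form eigenvalue idea: for B = gamma*I + Psi @ M @ Psi.T with a tall Psi, a thin QR factorization reduces the eigenvalue computation to a small k-by-k problem, without ever forming the n-by-n matrix B. The specific Psi and M below are random stand-ins rather than a particular Broyden-class update, and the paper's efficient QR updating across iterations is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, gamma = 500, 6, 1.0
Psi = rng.standard_normal((n, k))              # tall compact-form factor
M = rng.standard_normal((k, k))
M = (M + M.T) / 2.0                            # small symmetric core matrix

# thin QR of Psi reduces the spectrum of B = gamma*I + Psi M Psi^T
Q, R = np.linalg.qr(Psi)                       # Q: n-by-k, R: k-by-k
shifted = gamma + np.linalg.eigvalsh(R @ M @ R.T)   # k nontrivial eigenvalues
eigs = np.sort(np.concatenate([shifted, np.full(n - k, gamma)]))

# dense check, feasible only at this toy size
B = gamma * np.eye(n) + Psi @ M @ Psi.T
print(np.allclose(eigs, np.linalg.eigvalsh(B)))     # True
```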
Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have been recently proposed. However, these typically only achieve sub-linear convergence rates and have no...
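For orientation, the sketch below runs a generic stochastic L-BFGS loop: mini-batch gradients drive the step, curvature pairs are collected occasionally with both gradients evaluated on the same batch, and the standard two-loop recursion supplies the quasi-Newton direction. This follows the general pattern the abstract surveys, not any specific published variant; the logistic-regression data are synthetic.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(5)
N, d = 2_000, 10
X = rng.standard_normal((N, d))
y = np.where(X @ rng.standard_normal(d) > 0, 1.0, -1.0)

def batch_grad(w, idx):
    # gradient of l2-regularized logistic loss on a mini-batch
    z = y[idx] * (X[idx] @ w)
    return -(X[idx] * (y[idx] * expit(-z))[:, None]).mean(axis=0) + 1e-3 * w

def two_loop(g, mem):
    # standard L-BFGS two-loop recursion over stored (s, y) pairs
    q, alphas = g.copy(), []
    for s, yv in reversed(mem):
        a = (s @ q) / (s @ yv); alphas.append(a); q -= a * yv
    if mem:
        s, yv = mem[-1]; q *= (s @ yv) / (yv @ yv)   # initial scaling
    for (s, yv), a in zip(mem, reversed(alphas)):
        q += (a - (yv @ q) / (s @ yv)) * s
    return q

w, w_prev, mem, lr, batch = np.zeros(d), np.zeros(d), [], 0.5, 64
for t in range(1, 500):
    idx = rng.choice(N, batch, replace=False)
    w -= lr / t ** 0.5 * two_loop(batch_grad(w, idx), mem)
    if t % 20 == 0:                       # collect a curvature pair occasionally
        s = w - w_prev
        yv = batch_grad(w, idx) - batch_grad(w_prev, idx)  # same batch for both
        if s @ yv > 1e-10:
            mem = (mem + [(s, yv)])[-5:]  # keep the last 5 pairs
        w_prev = w.copy()
print("final full-gradient norm:", np.linalg.norm(batch_grad(w, np.arange(N))))
```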