Search results for: newton iteration method
Number of results: 1,663,489
Signal and image restoration problems are often solved by minimizing a cost function consisting of an ℓ2 data-fidelity term and a regularization term. We consider a class of convex and edge-preserving regularization functions. In particular, half-quadratic regularization, as a fixed-point iteration method, is usually employed to solve this problem. The main aim of this paper is to solve the above-d...
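As a rough illustration of this half-quadratic / fixed-point idea, the sketch below restores a 1-D signal by repeatedly solving a reweighted linear system. The edge-preserving regularizer phi(t) = sqrt(t^2 + eps^2), the finite-difference operator D, and all parameter values are assumptions made for the example, not the formulation used in the paper.

import numpy as np

def half_quadratic_restore(y, lam=2.0, eps=1e-3, iters=50):
    # Fixed-point (half-quadratic) iteration for
    #   min_x ||x - y||^2 + lam * sum_i phi((Dx)_i),  phi(t) = sqrt(t^2 + eps^2)
    n = y.size
    D = np.diff(np.eye(n), axis=0)             # 1-D forward-difference operator
    x = y.copy()
    for _ in range(iters):
        t = D @ x
        w = 1.0 / np.sqrt(t**2 + eps**2)       # weights phi'(t)/t, small across edges
        A = np.eye(n) + (lam / 2.0) * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)              # each nonlinear step becomes a linear solve
    return x

# usage: denoise a noisy step signal while keeping the edge
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
x_hat = half_quadratic_restore(y)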
The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of the paper analyzes an in...
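A minimal sketch of such a subsampled Newton step, here for ridge-regularized logistic regression (the model, the batch sizes, and the ridge term mu are illustrative assumptions; the paper's point is how the accuracy of these subsamples must be coordinated, e.g. grown over the iterations, to obtain a superlinear rate):

import numpy as np

def subsampled_newton(X, y, iters=20, grad_batch=2000, hess_batch=200, mu=1e-3, seed=0):
    # Newton-like iteration using a subsampled gradient and a (smaller) subsampled Hessian.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        ig = rng.choice(n, size=min(grad_batch, n), replace=False)
        ih = rng.choice(n, size=min(hess_batch, n), replace=False)
        p = 1.0 / (1.0 + np.exp(-X[ig] @ w))
        g = X[ig].T @ (p - y[ig]) / len(ig) + mu * w                       # subsampled gradient
        q = 1.0 / (1.0 + np.exp(-X[ih] @ w))
        H = (X[ih].T * (q * (1 - q))) @ X[ih] / len(ih) + mu * np.eye(d)   # subsampled Hessian
        w -= np.linalg.solve(H, g)                                         # Newton-like step
    return w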
It often happens that iteration processes used for solving the implicit relations arising in ODE-IVP methods only start to converge rapidly after a certain number of iterations. Fast convergence right from the beginning is particularly important if we want to use so-called step-parallel iteration in which the iteration method is concurrently applied at a number of step points. In this paper, we...
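For context, such implicit relations typically have the form Y = y_n + h*f(Y); the sketch below solves one of them by modified Newton iteration with the Jacobian frozen at y_n, the kind of inner iteration whose early convergence behaviour matters when it is applied concurrently at several step points. The function names and the stiff test problem are illustrative assumptions.

import numpy as np

def modified_newton_stage(f, Jf, y_n, h, iters=10, tol=1e-12):
    # Modified Newton iteration for the implicit relation Y = y_n + h*f(Y)
    # (backward Euler); the iteration matrix uses the Jacobian frozen at y_n.
    M = np.eye(y_n.size) - h * Jf(y_n)
    Y = y_n.copy()                       # trivial predictor
    for _ in range(iters):
        r = Y - y_n - h * f(Y)           # residual of the implicit relation
        dY = np.linalg.solve(M, -r)
        Y += dY
        if np.linalg.norm(dY) < tol:
            break
    return Y

# usage on a stiff linear test problem y' = A y
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
Y = modified_newton_stage(lambda y: A @ y, lambda y: A, np.array([1.0, 1.0]), h=0.1)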
Newton-Krylov methods, primarily using the Jacobian-Free Newton-Krylov (JFNK) approximation, are examined as an alternative to the traditional power iteration method for the calculation of the fundamental eigenmode in reactor analysis applications based on diffusion theory. One JFNK approach can be considered an acceleration technique for the standard power iteration as it is “wrapped around” t...
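A generic Jacobian-free Newton-Krylov loop, assuming SciPy's gmres and LinearOperator, is sketched below: the Jacobian is never formed, only its action on a vector is approximated by a finite difference of the residual. The residual in the usage lines is a toy nonlinear system, not the diffusion eigenvalue formulation discussed in the abstract (there, F would bundle the discretized flux equation with a normalization condition on the flux and eigenvalue).

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, x0, newton_iters=20, tol=1e-10, eps=1e-7):
    # Each Newton step solves J(x) dx = -F(x) with GMRES, where J(x) v is
    # approximated by the finite difference (F(x + eps*v) - F(x)) / eps.
    x = x0.astype(float).copy()
    for _ in range(newton_iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = LinearOperator((x.size, x.size), matvec=lambda v: (F(x + eps * v) - Fx) / eps)
        dx, _ = gmres(J, -Fx)
        x += dx
    return x

# usage: solve the small nonlinear system x + 0.1*x^3 = b
b = np.linspace(1.0, 2.0, 5)
sol = jfnk(lambda x: x + 0.1 * x**3 - b, np.zeros(5))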
Newton’s iteration is modified for the computation of the group inverses of singular Toeplitz matrices. At each iteration, the iteration matrix is approximated by a matrix with a low displacement rank. Because of the displacement structure of the iteration matrix, the matrix-vector multiplication involved in Newton’s iteration can be done efficiently. We show that the convergence of the modifie...
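The underlying update is the Newton (Newton-Schulz) iteration X_{k+1} = X_k (2I - A X_k), in which the only work per step is matrix multiplication. The sketch below runs it with X_0 = alpha*A^T, a choice that converges quadratically to the Moore-Penrose inverse; the paper's group-inverse initialization and its low-displacement-rank compression of the iteration matrix are omitted, so this is only a dense baseline.

import numpy as np
from scipy.linalg import toeplitz

def newton_schulz_inverse(A, iters=30):
    # Newton's iteration X_{k+1} = X_k (2I - A X_k); with X_0 = alpha*A.T and
    # alpha <= 1 / (||A||_1 ||A||_inf) it converges to the Moore-Penrose inverse.
    n = A.shape[0]
    alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    X = alpha * A.T
    for _ in range(iters):
        X = X @ (2.0 * np.eye(n) - A @ X)
    return X

# usage on a small symmetric Toeplitz matrix (illustrative, not from the paper)
A = toeplitz([2.0, 1.0, 0.0, -1.0])
print(np.allclose(newton_schulz_inverse(A), np.linalg.pinv(A), atol=1e-8))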
In recent years, implicit stochastic Runge–Kutta (SRK) methods have been developed both for strong and weak approximations. For these methods, the stage values are only given implicitly. However, in practice these implicit equations are solved by iterative schemes such as simple iteration, modified Newton iteration or full Newton iteration. We employ a unifying approach for the construction of ...
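To make the distinction concrete, the sketch below solves the stage equation of a drift-implicit scheme for a scalar SDE either by simple (fixed-point) iteration or by full Newton iteration; the scheme, the coefficients, and the function names are illustrative assumptions, not the SRK methods constructed in the paper.

import numpy as np

def implicit_stage(a, da, y_n, h, dW, b, scheme="newton", iters=8):
    # Stage equation Y = y_n + h*a(Y) + b(y_n)*dW of a drift-implicit scheme
    # for a scalar SDE, solved by simple iteration or by full Newton iteration.
    c = y_n + b(y_n) * dW                        # explicit part, fixed during the solve
    Y = y_n
    for _ in range(iters):
        if scheme == "simple":
            Y = c + h * a(Y)                     # simple (fixed-point) iteration
        else:
            r = Y - c - h * a(Y)                 # residual
            Y = Y - r / (1.0 - h * da(Y))        # full Newton step
    return Y

# usage: dX = -4*X dt + 0.5*X dW with illustrative step and increment
Y = implicit_stage(lambda x: -4.0 * x, lambda x: -4.0, 1.0, h=0.05, dW=0.1, b=lambda x: 0.5 * x)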
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. U...
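A plain truncated (inexact) Newton iteration of this kind looks like the sketch below: the inner conjugate-gradient solve of the Newton system is stopped once its residual has been reduced to a prescribed fraction eta of the gradient norm. This is a generic unconstrained-optimization sketch and does not reproduce the paper's regularizing stopping rule for ill-posed problems.

import numpy as np

def truncated_newton(grad, hess, x0, outer_iters=50, eta=0.1, tol=1e-8):
    # Each outer step solves H p = -g approximately with CG (assuming H is
    # positive definite near the iterate) and takes the full step x + p.
    x = x0.astype(float).copy()
    for _ in range(outer_iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        H = hess(x)
        p = np.zeros_like(x)
        r = -g.copy()                            # CG residual for H p = -g, with p = 0
        d = r.copy()
        for _ in range(x.size):
            if np.linalg.norm(r) <= eta * gnorm: # truncation: prescribed relative reduction
                break
            Hd = H @ d
            step = (r @ r) / (d @ Hd)
            p += step * d
            r_new = r - step * Hd
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        x += p
    return x

# usage on a strictly convex quadratic
A = np.diag([1.0, 10.0, 100.0])
x_star = truncated_newton(lambda x: A @ x - np.ones(3), lambda x: A, np.zeros(3))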
For a square system of analytic equations, a Newton-invariant subspace is a set which contains the resulting point of a Newton iteration applied to each point in the subspace. For example, if the equations have real coefficients, then the set of real points forms a Newton-invariant subspace. Starting with any point for which Newton’s method quadratically converges to a solution, this article uses...
For unconstrained optimization, Newton-type methods have good convergence properties and are used in practice. Newton’s method combined with a trust-region method (the TR-Newton method), the cubic regularization of Newton’s method, and the regularized Newton method with line-search methods are such Newton-type methods. The TR-Newton method and the cubic regularization of N...
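As one concrete instance of such Newton-type methods, a regularized Newton step combined with a backtracking (Armijo) line search can be sketched as follows; the particular regularizer mu*||g||*I and the line-search constants are illustrative choices, not the exact methods analyzed in the works cited above.

import numpy as np

def regularized_newton(f, grad, hess, x0, mu=1e-2, iters=200, tol=1e-8):
    # Step direction p solves (H + mu*||g||*I) p = -g, which keeps the step
    # well defined when the Hessian is indefinite; the step length comes from
    # a simple backtracking line search.
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x) + mu * np.linalg.norm(g) * np.eye(x.size), -g)
        t = 1.0
        for _ in range(30):                      # backtracking (Armijo) line search
            if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
                break
            t *= 0.5
        x += t * p
    return x

# usage on the two-dimensional Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
x_star = regularized_newton(f, grad, hess, np.array([-1.2, 1.0]))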
[Chart: number of search results per year]