Search results for: newton iteration method

Number of results: 1,663,489

2013
Robert J. Renka

Least squares methods are effective for solving systems of partial differential equations. In the case of nonlinear systems the equations are usually linearized by a Newton iteration or successive substitution method, and then treated as a linear least squares problem. We show that it is often advantageous to form a sum of squared residuals first, and then compute a zero of the gradient with a ...
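The idea of forming the sum of squared residuals first and then finding a zero of its gradient can be illustrated on a toy scalar fit (this is only a sketch of the general idea, not the paper's PDE setting; the model and data below are invented for illustration):

```python
# Toy sketch: fit exp(x*t) to data by forming the sum of squared
# residuals F(x) = sum_i (exp(x*t_i) - y_i)^2 first, then applying a
# 1-D Newton iteration to the gradient equation F'(x) = 0.
import math

t = [0.0, 0.5, 1.0, 1.5]
y = [math.exp(0.7 * ti) for ti in t]   # synthetic data, true exponent 0.7

def grad(x):            # F'(x)
    return sum(2 * (math.exp(x*ti) - yi) * ti * math.exp(x*ti)
               for ti, yi in zip(t, y))

def hess(x):            # F''(x), including the second-order residual term
    s = 0.0
    for ti, yi in zip(t, y):
        r = math.exp(x*ti) - yi        # residual
        dr = ti * math.exp(x*ti)       # residual derivative
        d2r = ti*ti * math.exp(x*ti)   # residual second derivative
        s += 2 * (dr*dr + r*d2r)
    return s

x = 0.5                 # starting guess near the solution
for _ in range(20):
    step = grad(x) / hess(x)
    x -= step
    if abs(step) < 1e-12:
        break
# x now approximates the true exponent 0.7
```

Note that, unlike Gauss-Newton on the linearized residuals, Newton on the gradient keeps the second-order residual term `r*d2r` in the Hessian.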

2007
Xueping Guo

Inexact Newton methods are constructed by combining Newton’s method with another iterative method that is used to solve the Newton equations inexactly. In this paper, we establish two semilocal convergence theorems for the inexact Newton methods. When these two theorems are specified to Newton’s method, we obtain a different Newton-Kantorovich theorem about Newton’s method. When the iterative m...
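The structure described here, an outer Newton iteration whose linear Newton equation is solved only approximately by an inner iterative method, can be sketched on a small system (the 2x2 model problem and the Jacobi inner solver below are assumptions for illustration, not the paper's setting):

```python
# Sketch of an inexact Newton method on the 2x2 system
# F(x, y) = (3x - cos y, 3y - sin x) = 0.  The Newton equation
# J s = -F is solved only approximately by Jacobi sweeps, stopping
# once the inner residual drops below eta * ||F|| (the forcing term).
import math

def F(v):
    x, y = v
    return [3*x - math.cos(y), 3*y - math.sin(x)]

def J(v):
    x, y = v
    return [[3.0, math.sin(y)], [-math.cos(x), 3.0]]

def norm(r):
    return math.sqrt(sum(t*t for t in r))

def jacobi_solve(A, b, tol):
    """Jacobi iterations for A s = b until ||A s - b|| <= tol."""
    s = [0.0, 0.0]
    for _ in range(100):
        s = [(b[0] - A[0][1]*s[1]) / A[0][0],
             (b[1] - A[1][0]*s[0]) / A[1][1]]
        r = [A[0][0]*s[0] + A[0][1]*s[1] - b[0],
             A[1][0]*s[0] + A[1][1]*s[1] - b[1]]
        if norm(r) <= tol:
            break
    return s

v = [1.0, 1.0]
eta = 0.1                       # inner accuracy relative to ||F||
for _ in range(50):
    Fv = F(v)
    if norm(Fv) < 1e-12:
        break
    s = jacobi_solve(J(v), [-Fv[0], -Fv[1]], eta * norm(Fv))
    v = [v[0] + s[0], v[1] + s[1]]
```

With a fixed forcing term `eta` the outer iteration converges linearly; driving `eta` to zero recovers the quadratic convergence of exact Newton.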

Journal: Numerische Mathematik, 2006
Xiao-Xia Guo, Wen-Wei Lin, Shu-Fang Xu

In this paper we propose a structure-preserving doubling algorithm (SDA) for computing the minimal nonnegative solutions to the nonsymmetric algebraic Riccati equation (NARE) based on the techniques developed in the symmetric cases. This method allows the simultaneous approximation of the minimal nonnegative solutions of the NARE and its dual equation, only requires the solutions of two linear ...

Journal: Caspian Journal of Mathematical Sciences, 2014
H. Esmaeili, M. Rostami

In this paper, we present a new modification of the Chebyshev-Halley method, free from second derivatives, to solve nonlinear equations. The convergence analysis shows that our modification is third-order convergent. Every iteration of this method requires one function and two first derivative evaluations. So, its efficiency index is $3^{1/3}=1.442$, which is better than that o...
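The cost profile described here (one function and two first-derivative evaluations per step, third-order convergence) is shared by the classical midpoint Newton method, sketched below. This is not necessarily the modification proposed in the paper, only a representative scheme with the same evaluation count:

```python
# Midpoint Newton method, a third-order scheme free of second derivatives:
#   y_n     = x_n - f(x_n) / (2 f'(x_n))
#   x_{n+1} = x_n - f(x_n) / f'(y_n)
# Each step costs one f and two f' evaluations, matching the efficiency
# index 3^(1/3) = 1.442 cited in the abstract.
def midpoint_newton(f, df, x, tol=1e-14, maxit=50):
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / (2 * df(x))   # half Newton step (first f' evaluation)
        x = x - fx / df(y)         # full step with derivative at midpoint
    return x

# Example: cube root of 2
root = midpoint_newton(lambda x: x**3 - 2, lambda x: 3*x**2, 1.0)
```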

1998
P. J. van der Houwen

We consider implicit integration methods for the numerical solution of stiff initial-value problems. In applying such methods, the implicit relations are usually solved by Newton iteration. However, it often happens that in subintervals of the integration interval the problem is nonstiff or mildly stiff with respect to the stepsize. In these nonstiff subintervals, we do not need the (expensive) Newton ...
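The idea of avoiding Newton iteration where the problem is nonstiff with respect to the stepsize can be sketched as follows (the switching rule and the scalar backward Euler model below are assumptions for illustration, not the paper's actual scheme):

```python
# Sketch: solve the implicit relation y = y_n + h*f(y) of backward Euler
# with cheap fixed-point iteration first, falling back to Newton only
# when the observed contraction is too weak (stiff w.r.t. the stepsize).
def implicit_euler_step(f, dfdy, yn, h, tol=1e-12):
    # Fixed-point phase: no Jacobian, one f evaluation per sweep.
    y, prev = yn, None
    for _ in range(10):
        y_new = yn + h * f(y)
        delta = abs(y_new - y)
        y = y_new
        if delta < tol:
            return y                    # nonstiff: fixed point suffices
        if prev is not None and delta > 0.5 * prev:
            break                       # weak contraction: switch to Newton
        prev = delta
    # Newton phase on g(y) = y - yn - h*f(y).
    for _ in range(50):
        g = y - yn - h * f(y)
        if abs(g) < tol:
            break
        y -= g / (1.0 - h * dfdy(y))
    return y

# Stiff example y' = -1000 y with h = 0.01: fixed point diverges
# (|h * f'| = 10 > 1), so the step is finished by Newton.
y1 = implicit_euler_step(lambda y: -1000.0*y, lambda y: -1000.0, 1.0, 0.01)
```

For this linear stiff example the exact backward Euler update is y1 = 1/(1 + 1000*0.01) = 1/11, which the Newton phase reaches in one step.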

1997
Michael Drexler

Newton's Method constitutes a nested iteration scheme with the Newton step as the outer iteration and a linear solver of the Jacobian system as the inner iteration. We examine the interaction between these two schemes and derive solution techniques for the linear system from the properties of the outer Newton iteration. Contrary to inexact Newton methods, our techniques do not rely on relaxed t...

Journal: Math. Comput., 1998
Chun-Hua Guo, Peter Lancaster

When Newton’s method is applied to find the maximal symmetric solution of an algebraic Riccati equation, convergence can be guaranteed under moderate conditions. In particular, the initial guess need not be close to the solution. The convergence is quadratic if the Fréchet derivative is invertible at the solution. In this paper we examine the behaviour of the Newton iteration when the derivativ...
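The degradation studied here, Newton iteration at a solution where the derivative is not invertible, already shows up in the scalar case (a toy illustration, not the Riccati setting of the paper):

```python
# When the derivative is singular at the root, Newton degrades from
# quadratic to linear convergence.  Scalar illustration: f(x) = (x-1)^2
# has a double root at x = 1, f'(1) = 0, and the Newton error is
# halved at every step (rate exactly 1/2).
f = lambda x: (x - 1.0)**2
df = lambda x: 2.0 * (x - 1.0)

x = 2.0
errors = []
for _ in range(20):
    x = x - f(x) / df(x)       # simplifies to x - (x-1)/2
    errors.append(abs(x - 1.0))

ratios = [errors[i+1] / errors[i] for i in range(5)]
# every ratio is ~0.5: linear convergence, not quadratic
```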

2013
Timothy A. Davis, William W. Hager, James T. Hungerford

This paper considers the problem of minimizing a convex, separable quadratic function subject to a knapsack constraint and a box constraint. An algorithm called NAPHEAP is developed for solving this problem. The algorithm solves the Karush-Kuhn-Tucker system using a starting guess to the optimal Lagrange multiplier and updating the guess monotonically in the direction of the solution. The start...
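The reduction behind this approach, solving the KKT system by a one-dimensional search for the knapsack multiplier, can be sketched with plain bisection (NAPHEAP itself updates the multiplier monotonically with heap support; the bisection, the sign conventions, and the bracket below are simplifying assumptions):

```python
# Continuous quadratic knapsack sketch:
#   minimize  sum_i (0.5*d_i*x_i^2 - a_i*x_i)
#   subject to  sum_i x_i = r,   lo_i <= x_i <= hi_i
# KKT: each x_i is the clipped stationary point x_i(lam) = clip((a_i+lam)/d_i),
# and sum_i x_i(lam) is nondecreasing in lam, so lam can be bisected.
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def quad_knapsack(d, a, lo, hi, r, tol=1e-12):
    def x_of(lam):
        return [clip((ai + lam) / di, l, h)
                for di, ai, l, h in zip(d, a, lo, hi)]
    left, right = -1e6, 1e6          # assumed bracket for the multiplier
    while right - left > tol:
        mid = 0.5 * (left + right)
        if sum(x_of(mid)) < r:       # sum(x(lam)) is monotone in lam
            left = mid
        else:
            right = mid
    return x_of(0.5 * (left + right))

# Tiny example: d = (1, 2), a = (1, 1), box [0, 5]^2, budget r = 3
# gives lam = 1 and x = (2, 1).
x = quad_knapsack(d=[1.0, 2.0], a=[1.0, 1.0],
                  lo=[0.0, 0.0], hi=[5.0, 5.0], r=3.0)
```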

Journal: CoRR, 2017
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro

The problem of minimizing an objective that can be written as the sum of a set of n smooth and strongly convex functions is challenging because the cost of evaluating the function and its derivatives is proportional to the number of elements in the sum. The Incremental Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods that have a cost per iterat...

Chart of the number of search results per year

Click on the chart to filter the results by publication year