Search results for: squares and newton
Number of results: 16,835,918
We develop a local convergence analysis of an iterative method for solving nonlinear least squares problems with operator decomposition under classical and generalized Lipschitz conditions. We consider the cases of both zero and nonzero residuals and determine the corresponding convergence orders. We use two types of conditions (center and restricted-region conditions) to study the method. Moreover, we obtain a larger convergence radius and tighter error estimates than in ...
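For orientation, the problem class and the standard Gauss-Newton-type iteration that such local convergence analyses study can be written as follows; the abstract does not state the method's exact form, so this is only the textbook formulation:

```latex
\min_{x \in \mathbb{R}^n} \frac{1}{2}\,\lVert F(x)\rVert^2,
\qquad
x_{k+1} = x_k - \bigl(F'(x_k)^{\top} F'(x_k)\bigr)^{-1} F'(x_k)^{\top} F(x_k).
```

For zero-residual problems this iteration is locally quadratically convergent, while a nonzero residual at the solution typically reduces the order to linear, which is why the two cases are analyzed separately.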
The Matlab implementation of a trust-region Gauss-Newton method for bound-constrained nonlinear least-squares problems is presented. The solver, called TRESNEI, is adequate for zero- and small-residual problems and handles the solution of nonlinear systems of equalities and inequalities. The structure and the usage of the solver are described, and an extensive numerical comparison with functions f...
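TRESNEI itself is not reproduced here; as a rough stand-in for the same problem class, the sketch below fits a bound-constrained, small-residual nonlinear least-squares model with SciPy's trust-region-reflective solver (the model, data, and bounds are illustrative, not taken from the paper):

```python
# Not TRESNEI: a minimal SciPy sketch of the same problem class,
# a bound-constrained small-residual nonlinear least-squares fit.
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, y):
    """Residuals of an illustrative exponential-decay model y ~ a*exp(-b*t)."""
    a, b = p
    return a * np.exp(-b * t) - y

t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

sol = least_squares(residuals, x0=[1.0, 1.0], args=(t, y),
                    bounds=([0.0, 0.0], [10.0, 10.0]),  # box constraints
                    method="trf")                        # trust-region reflective
print(sol.x, sol.cost)
```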
Based on minimizing a piecewise differentiable lp function subject to a single inequality constraint, this paper discusses algorithms for a discretized regularization problem for ill-posed inverse problems. We examine the computational challenges of solving this regularization problem. Possible minimization algorithms such as the steepest descent method, the iteratively reweighted least squares (IRLS) me...
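A minimal sketch of the IRLS idea for an lp-type penalty, assuming the smoothed unconstrained form min_x ||Ax - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2) rather than the paper's single-inequality-constrained discretization:

```python
# Hedged sketch: generic IRLS for a smoothed lp-regularized least-squares problem,
#   min_x ||A x - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2),  0 < p <= 2.
import numpy as np

def irls_lp(A, b, lam=1e-2, p=1.0, eps=1e-8, iters=50):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # unregularized starting point
    for _ in range(iters):
        w = (x**2 + eps) ** ((p - 2) / 2)          # weights of the quadratic majorizer
        # normal equations of the weighted surrogate:
        # (A^T A + 0.5*lam*p*diag(w)) x = A^T b
        H = A.T @ A + 0.5 * lam * p * np.diag(w)
        x = np.linalg.solve(H, A.T @ b)
    return x
```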
We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization, to which we add Gauss-Newton iterations. For initial values, we test random choice, leas...
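For the "least squares" initialization mentioned among the tested starting values, a plain (unnormalized, unweighted) linear fit of the fundamental matrix looks roughly like the sketch below; FNS, HEIV, and renormalization themselves are not reproduced:

```python
# Hedged sketch of the plain linear least-squares ("8-point" style) estimate
# often used to initialize iterative methods; no data normalization or
# covariance weighting is performed here.
import numpy as np

def fundamental_lstsq(x1, x2):
    """x1, x2: (N, 2) arrays of corresponding points in the two images."""
    u, v = x1[:, 0], x1[:, 1]
    up, vp = x2[:, 0], x2[:, 1]
    # each row encodes the epipolar constraint x2^T F x1 = 0 (row-major F)
    M = np.column_stack([up*u, up*v, up, vp*u, vp*v, vp,
                         u, v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(M)
    F = Vt[-1].reshape(3, 3)          # right singular vector of the smallest sigma
    # enforce the rank-2 constraint
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```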
In this paper, we consider solving the robust linear regression problem by an inexact Newton method and an iteratively reweighted least squares method. We show that each of these methods can be combined with the preconditioned conjugate gradient least-squares algorithm to solve large, sparse systems of linear equations efficiently. We consider the constant preconditioner and preconditioners base...
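A minimal sketch of iteratively reweighted least squares with Huber weights; for brevity each weighted subproblem is solved by a dense factorization rather than the preconditioned conjugate gradient least-squares solver the paper pairs it with:

```python
# Hedged sketch: IRLS for robust linear regression with Huber weights.
# The paper solves each weighted subproblem with preconditioned CG least
# squares for large sparse systems; a dense lstsq is used here for brevity.
import numpy as np

def irls_huber(A, b, delta=1.345, iters=20):
    x = np.linalg.lstsq(A, b, rcond=None)[0]             # ordinary LS start
    for _ in range(iters):
        r = A @ x - b
        absr = np.maximum(np.abs(r), 1e-12)              # avoid division by zero
        w = np.where(absr <= delta, 1.0, delta / absr)   # Huber weights psi(r)/r
        sw = np.sqrt(w)[:, None]
        # weighted least-squares step: min_x ||W^(1/2) (A x - b)||
        x = np.linalg.lstsq(sw * A, sw[:, 0] * b, rcond=None)[0]
    return x
```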
We describe a generalized Levenberg-Marquardt method for computing critical points of the Ginzburg-Landau energy functional which models superconductivity. The algorithm is a blend of a Newton iteration with a Sobolev gradient descent method, and is equivalent to a trust-region method in which the trust-region radius is defined by a Sobolev metric. Numerical test results demonstrate the method t...
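A generic Levenberg-Marquardt loop with the usual Euclidean damping term, shown only to fix ideas; the paper's variant defines the trust region through a Sobolev metric, which is not reproduced here:

```python
# Hedged sketch of a plain Levenberg-Marquardt iteration (Euclidean damping).
import numpy as np

def levenberg_marquardt(residual, jacobian, x, mu=1e-2, iters=100, tol=1e-10):
    """residual, jacobian: user-supplied callables for r(x) and J(x)."""
    r = residual(x)
    for _ in range(iters):
        J = jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        # damped Gauss-Newton step: (J^T J + mu I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
        r_new = residual(x + dx)
        if r_new @ r_new < r @ r:      # accept: shrink damping (more Newton-like)
            x, r, mu = x + dx, r_new, mu * 0.5
        else:                          # reject: grow damping (more gradient-like)
            mu *= 2.0
    return x
```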
A penalized least squares approach known as Tikhonov regularization is commonly used to estimate distributed parameters in partial differential equations. The application of quasi-Newton minimization methods then yields very large linear systems. While these systems are not sparse, sparse matrices play an important role in gradient evaluation and Hessian matrix-vector multiplications. Motivated ...
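To illustrate the role of sparse matrices in Hessian-vector products, here is a sketch for a linear Tikhonov problem min_q ||Aq - d||^2 + alpha ||Lq||^2, with A and L as illustrative sparse operators (not the PDE-constrained setting of the paper):

```python
# Hedged sketch: solve the Tikhonov normal equations
#   (A^T A + alpha L^T L) q = A^T d
# using only Hessian-vector products built from sparse factors,
# without ever forming the (dense) Hessian explicitly.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n, alpha = 200, 1e-3
A = sp.random(300, n, density=0.02, format="csr", random_state=0)  # illustrative forward map
L = sp.eye(n, format="csr")                                        # identity regularizer
d = np.ones(300)                                                   # synthetic data

hess = LinearOperator((n, n),
                      matvec=lambda v: A.T @ (A @ v) + alpha * (L.T @ (L @ v)))
q, info = cg(hess, A.T @ d)            # conjugate gradients on the SPD system
print(info, np.linalg.norm(A @ q - d))
```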