Search results for: hessian matrix

Number of results: 366,902

Journal: SIAM Review 1991
Philip E. Gill, Walter Murray, Michael A. Saunders, Margaret H. Wright

Active-set quadratic programming (QP) methods use a working set to define the search direction and multiplier estimates. In the method proposed by Fletcher in 1971, and in several subsequent mathematically equivalent methods, the working set is chosen to control the inertia of the reduced Hessian, which is never permitted to have more than one nonpositive eigenvalue. (We call such methods inert...
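
As a minimal illustration of the inertia control described above, the sketch below counts the eigenvalue signs of the reduced Hessian Z^T H Z for a working-set matrix A; the QR-based null-space basis and the tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def reduced_hessian_inertia(H, A):
    """Count (positive, negative, zero) eigenvalues of the reduced
    Hessian Z^T H Z, where the columns of Z span the null space of
    the working-set constraint matrix A. Inertia-controlling methods
    keep at most one nonpositive eigenvalue here."""
    m, n = A.shape
    q, _ = np.linalg.qr(A.T, mode="complete")  # full QR of A^T
    Z = q[:, m:]                               # orthonormal basis of null(A)
    eig = np.linalg.eigvalsh(Z.T @ H @ Z)
    tol = 1e-10 * max(1.0, abs(eig).max())
    pos = int((eig > tol).sum())
    neg = int((eig < -tol).sum())
    return pos, neg, len(eig) - pos - neg

# Example: 3 variables, one working constraint x0 + x1 + x2 = const.
H = np.diag([2.0, 1.0, -3.0])
A = np.array([[1.0, 1.0, 1.0]])
print(reduced_hessian_inertia(H, A))  # one positive, one negative eigenvalue
```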

Journal: Physics in Medicine and Biology 2005
Martin Schweiger, Simon R. Arridge, Ilkka Nissilä

We present a regularized Gauss-Newton method for solving the inverse problem of parameter reconstruction from boundary data in frequency-domain diffuse optical tomography. To avoid the explicit formation and inversion of the Hessian, which is often prohibitively expensive in terms of memory resources and runtime for large-scale problems, we propose to solve the normal equation at each Newton ste...
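
A hedged sketch of the matrix-free alternative the abstract alludes to: conjugate gradients applied to the regularized Gauss-Newton normal equations (J^T J + λI)δ = -J^T r, using only Jacobian-vector products. The callables jvp and vjp are placeholders for the forward and adjoint model, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(jvp, vjp, residual, n, lam=1e-3, maxiter=50):
    """Solve (J^T J + lam*I) delta = -J^T r by conjugate gradients,
    without ever forming J or the Gauss-Newton Hessian. jvp(v) must
    return J @ v and vjp(u) must return J.T @ u."""
    A = LinearOperator((n, n), matvec=lambda v: vjp(jvp(v)) + lam * v)
    delta, info = cg(A, -vjp(residual), maxiter=maxiter)
    return delta

# Toy usage with an explicit Jacobian standing in for the real model.
J = np.random.default_rng(0).normal(size=(20, 5))
r = np.ones(20)
step = gauss_newton_step(lambda v: J @ v, lambda u: J.T @ u, r, n=5)
```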

1997
Jae Dong Noh, Doochul Kim

We investigate the universal property of curvatures in surface models that display a flat phase and a rough phase whose criticality is described by the Gaussian model. Earlier we derived a relation between the Hessian of the free energy and the Gaussian coupling constant in the six-vertex model. Here we show its validity in a general setting using renormalization group arguments. The general va...

Journal: J. Global Optimization 2004
Xin Chen, Houduo Qi, Liqun Qi, Kok Lay Teo

In this paper, we consider smooth convex approximations to the maximum eigenvalue function. To make it applicable to a wide class of applications, the study is conducted on the composite function of the maximum eigenvalue function and a linear operator mapping R^m to S^n, the space of n-by-n symmetric matrices. The composite function in turn is the natural objective function of minimizing the maxim...
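
For orientation, one classical smoothing of the maximum eigenvalue function is the log-sum-exp (exponential penalty) construction below; the paper's specific approximation may differ.

```latex
% Log-sum-exp smoothing of the maximum eigenvalue of X in S^n,
% with eigenvalues \lambda_1(X) \ge \dots \ge \lambda_n(X) and
% smoothing parameter \mu > 0:
f_\mu(X) = \mu \log \sum_{i=1}^{n} \exp\!\bigl(\lambda_i(X)/\mu\bigr)
         = \mu \log \operatorname{tr} \exp(X/\mu),
\qquad
\lambda_1(X) \le f_\mu(X) \le \lambda_1(X) + \mu \log n .
```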

Journal: SIAM J. Numerical Analysis 2009
Néstor E. Aguilera, Pedro Morin

Many problems of theoretical and practical interest involve finding a convex or concave function. Examples include optimization problems such as finding the projection onto the convex functions in H^k(Ω), as well as some problems in economics. In the continuous setting and assuming smoothness, the convexity constraints may be given locally by asking the Hessian matrix to be positive semidefinite, but in maki...
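
A small numerical illustration of the local convexity test mentioned above (the Hessian positive semidefinite pointwise), using a finite-difference Hessian; the step size and tolerance are arbitrary choices, not the paper's discretization.

```python
import numpy as np

def fd_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def locally_convex(f, x, tol=1e-6):
    """Local convexity at x <=> the Hessian there is positive semidefinite."""
    return bool(np.linalg.eigvalsh(fd_hessian(f, x)).min() >= -tol)

print(locally_convex(lambda x: x @ x, np.array([0.3, -0.7])))   # True
print(locally_convex(lambda x: x[0]**2 - x[1]**2, np.ones(2)))  # False
```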

1999
Eduardo F. D'Azevedo

This work considers the effectiveness of using an anisotropic coordinate transformation in adaptive mesh generation. The anisotropic coordinate transformation is derived by interpreting the Hessian matrix of the data function as a metric tensor that measures the local approximation error. The Hessian matrix contains information about the local curvature of the surface and gives guidance in the asp...
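
A short sketch of the standard construction behind this idea: symmetrize the Hessian's spectrum by taking absolute eigenvalues so it can serve as a Riemannian metric for measuring edge lengths. The clipping floor eps is an illustrative safeguard, not a value from the paper.

```python
import numpy as np

def hessian_metric(H, eps=1e-8):
    """Turn a (possibly indefinite) Hessian into a symmetric positive
    definite metric tensor M = Q |Lambda| Q^T, the usual recipe in
    Hessian-based anisotropic mesh adaptation."""
    lam, Q = np.linalg.eigh(H)
    lam = np.maximum(np.abs(lam), eps)   # absolute eigenvalues, floored
    return (Q * lam) @ Q.T

def metric_edge_length(M, e):
    """Length of an edge vector e measured in the metric M."""
    return float(np.sqrt(e @ M @ e))

# Saddle-shaped data u(x, y) = x**2 - y**2 has Hessian diag(2, -2):
M = hessian_metric(np.diag([2.0, -2.0]))
print(metric_edge_length(M, np.array([1.0, 0.0])))  # sqrt(2)
```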

Journal: Math. Program. 1983
Mukund N. Thapa

Newton-type methods and quasi-Newton methods have proven to be very successful in solving dense unconstrained optimization problems. Recently there has been considerable interest in extending these methods to solving large problems when the Hessian matrix has a known a priori sparsity pattern. This paper treats sparse quasi-Newton methods in a uniform fashion and shows the effect of loss of pos...
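
For reference, the dense BFGS update with the usual curvature safeguard is sketched below; the sparsity-preserving variants the paper surveys modify this formula, and forcing a sparsity pattern onto the updated matrix is exactly where positive definiteness can be lost.

```python
import numpy as np

def bfgs_update(B, s, y, curv_tol=1e-10):
    """Dense BFGS update of a Hessian approximation B, given the step
    s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k. Positive
    definiteness is preserved only while the curvature condition
    s^T y > 0 holds."""
    sy = float(s @ y)
    if sy <= curv_tol:           # skip the update if curvature fails
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / sy

# Usage: start from B = identity and fold in (s, y) pairs per iteration.
```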

Journal: CoRR 2017
Huishuai Zhang, Caiming Xiong, James Bradbury, Richard Socher

Second-order methods for neural network optimization have several advantages over methods based on first-order gradient descent, including better scaling to large mini-batch sizes and fewer updates needed for convergence. But they are rarely applied to deep learning in practice because of high computational cost and the need for model-dependent algorithmic variations. We introduce a variant of ...
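
The core trick that makes such methods affordable is a Hessian-vector product that never materializes the Hessian. A finite-difference version, assuming only a gradient oracle grad(w) (a placeholder name), is sketched below; deep-learning implementations typically use Pearlmutter's exact R-operator or double backpropagation instead.

```python
import numpy as np

def hessian_vector_product(grad, w, v, eps=1e-6):
    """Approximate H(w) @ v from two gradient evaluations:
    Hv ~ (g(w + eps*v) - g(w - eps*v)) / (2*eps)."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2.0 * eps)

# Toy check against the quadratic f(w) = 0.5 * w^T A w, whose Hessian is A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda w: A @ w
v = np.array([1.0, -1.0])
print(hessian_vector_product(grad, np.zeros(2), v))  # ~ A @ v = [2, -1]
```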

2006
Samuel R. Buss

The traditional quasi-Newton method for updating the approximate Hessian is based on the change in the gradient of the objective function. This paper describes a new update method that also incorporates the change in the value of the function. The method effectively uses a cubic approximation of the objective function to better approximate its directional second derivative. The cubic approximat...
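
In generic notation, the Hermite-cubic construction behind this kind of estimate looks as follows, writing f_0 = f(x), f_1 = f(x+s), g_0 = ∇f(x), g_1 = ∇f(x+s) for a step s; the paper's actual update formula is not reproduced here.

```latex
% Cubic Hermite interpolant of phi(t) = f(x + t s) on [0, 1], matching
% phi(0) = f_0, phi(1) = f_1, phi'(0) = g_0^T s, phi'(1) = g_1^T s:
p(t) = f_0 + (g_0^\top s)\,t + c\,t^2 + d\,t^3, \qquad
c = 3(f_1 - f_0) - 2\,g_0^\top s - g_1^\top s, \quad
d = g_0^\top s + g_1^\top s - 2(f_1 - f_0).
% Its curvature at the new point estimates the directional second
% derivative s^T \nabla^2 f(x + s)\, s from function values as well
% as gradients:
p''(1) = 2c + 6d = 6(f_0 - f_1) + 2\,g_0^\top s + 4\,g_1^\top s .
```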

2010
Sanjeev S. Malalur, Michael T. Manry

A batch training algorithm for feed-forward networks is proposed which uses Newton's method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify the hidden units' input weights. Linear equations are then solved for the network's output weights. Elements of the new method's Gauss-Newton Hessian matrix ar...
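
A schematic of the idea, under the assumption that column k of a matrix J holds the change in stacked network outputs per unit step along hidden unit k's update direction: the optimal factors then solve a small K x K Gauss-Newton system. Names and dimensions are illustrative, not the authors' code.

```python
import numpy as np

def optimal_learning_factors(J, r, ridge=1e-8):
    """Solve the small K x K Gauss-Newton system for a vector z of
    learning factors, one per hidden unit. Column k of J is the change
    in the stacked network outputs per unit step along hidden unit k's
    update direction; r is the current output error. Minimizing
    ||r - J z||^2 over z gives the Newton step below."""
    K = J.shape[1]
    H = J.T @ J + ridge * np.eye(K)   # Gauss-Newton Hessian in z
    return np.linalg.solve(H, J.T @ r)

# Toy usage: 10 stacked training outputs, 3 hidden units.
rng = np.random.default_rng(1)
J = rng.normal(size=(10, 3))
r = rng.normal(size=10)
z = optimal_learning_factors(J, r)    # one learning factor per hidden unit
```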
