Search results for: newton quasi

Number of results: 102,092

2008
Jin Yu

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
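As a hedged illustration of the smooth BFGS building block that this abstract generalizes (a sketch of the classical update, not the subLBFGS algorithm itself), one inverse-Hessian BFGS update in NumPy looks like:

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the inverse Hessian approximation H.

    s = x_{k+1} - x_k (step), y = grad_{k+1} - grad_k (gradient change).
    Requires the curvature condition y @ s > 0; the updated matrix
    satisfies the secant condition H_new @ y = s.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Example: one update starting from H = I on a 2-variable problem.
H0 = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([2.0, 1.5])
H1 = bfgs_inverse_update(H0, s, y)
print(np.allclose(H1 @ y, s))  # secant condition holds -> True
```

The limited-memory variant avoids storing H explicitly and instead replays the last few (s, y) pairs in the two-loop recursion; the update formula above is the same in exact arithmetic.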

Journal: Math. Program., 2013
Damián R. Fernández

The quasi-Newton strategy presented in this paper preserves one of the most important features of the stabilized Sequential Quadratic Programming method: local convergence without constraint qualification assumptions. It is known that the primal-dual sequence converges quadratically assuming only the second-order sufficient condition. In this work, we show that if the matrices are updated ...

Journal: SIAM Journal on Optimization, 2015
Philipp Hennig

This paper proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems Bx = b with positive definite B for x. The goal is to replace the point estimates returned by existing methods with a Gaussian posterior belief over the elements of the inverse of B, which can be used to estimate errors. Recent probabilistic interpretations of the secant family of q...

Journal: Optimization Methods and Software, 2009
Osman Güler Filiz Gürtuna Olena Shevchenko

It is known that quasi-Newton updates can be characterized by variational means, sometimes in more than one way. This paper has two main goals. We first formulate variational problems appearing in quasi-Newton methods within the vector space of symmetric matrices. This simplifies both their formulations and their subsequent solutions. We then construct, for the first time, duals of the variatio...

2011
Michael W. Trosset

Quasi-Newton methods for numerical optimization exploit quadratic Taylor polynomial models of the objective function. Trust regions are widely used to ensure the global convergence of these methods. Analogously, response surface methods for stochastic optimization exploit linear and quadratic regression models of the objective function. Ridge analysis is widely used to safeguard the optimizatio...
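A minimal sketch of the trust-region idea this abstract mentions, using the Cauchy point (the minimizer of the quadratic model along the steepest-descent direction, restricted to the trust region). This is a generic textbook construction assumed for illustration, not the paper's method:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the model m(p) = g @ p + 0.5 * p @ B @ p, ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0  # model is unbounded along -g: step to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

g = np.array([1.0, 2.0])                     # gradient at the current iterate
B = np.array([[2.0, 0.0], [0.0, 1.0]])       # (quasi-Newton) Hessian model
p = cauchy_point(g, B, delta=0.5)
print(np.linalg.norm(p) <= 0.5 + 1e-12)      # step stays inside the trust region
```

Practical trust-region methods improve on the Cauchy point (e.g. dogleg or Steihaug-CG steps), but the Cauchy point alone is already enough to establish global convergence.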

2005

Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the objective function second derivatives. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix, in the case where the Symmetric Rank One update formula is used. The rate of convergence ...
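The Symmetric Rank One formula the abstract refers to can be sketched as follows (a generic SR1 step with a standard skip safeguard, assumed for illustration; the paper's contribution is the convergence analysis, not the update itself):

```python
import numpy as np

def sr1_update(B, s, y, c=1e-8):
    """Symmetric Rank One update of a Hessian approximation B.

    Skips the update when the denominator (y - B s) @ s is too small,
    the usual safeguard against an ill-defined rank-one correction.
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < c * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip the update
    return B + np.outer(r, r) / denom

B0 = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([3.0, 1.0])
B1 = sr1_update(B0, s, y)
print(np.allclose(B1 @ s, y))  # secant condition B_new @ s = y -> True
```

Unlike BFGS, SR1 does not enforce positive definiteness, which is precisely why its iterates can converge to the true (possibly indefinite) Hessian.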

2013
Asen L. Dontchev

This paper is a continuation of our previous paper [3], where we presented generalizations of the Dennis-Moré theorem to characterize q-superlinear convergence of quasi-Newton methods for solving equations and variational inequalities in Banach spaces. Here we prove Dennis-Moré type theorems for inexact quasi-Newton methods applied to variational inequalities in finite dimensions. We first consid...

2013
Rafal Zdunek Anh Huy Phan Andrzej Cichocki

Several variants of Nonnegative Matrix Factorization (NMF) have been proposed for supervised classification of various objects. Graph regularized NMF (GNMF) incorporates information on the geometric structure of the data into the training process, which considerably improves the classification results. However, the multiplicative algorithms used for updating the underlying factors may result in a sl...
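For context, a hedged sketch of the standard Lee-Seung multiplicative updates for plain Frobenius-norm NMF (the baseline scheme whose slow convergence motivates quasi-Newton alternatives; this is not the graph-regularized variant or the algorithm this paper develops):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = np.abs(rng.standard_normal((20, 30)))  # nonnegative data matrix

r = 4                                       # factorization rank (illustrative)
A = np.abs(rng.standard_normal((20, r)))    # basis factor
X = np.abs(rng.standard_normal((r, 30)))    # coefficient factor
eps = 1e-12                                 # guards against division by zero

for _ in range(200):
    # Lee-Seung multiplicative updates for the objective ||Y - A X||_F^2;
    # elementwise multiplication keeps both factors nonnegative.
    X *= (A.T @ Y) / (A.T @ A @ X + eps)
    A *= (Y @ X.T) / (A @ X @ X.T + eps)

rel_err = np.linalg.norm(Y - A @ X) / np.linalg.norm(Y)
print(A.min() >= 0 and X.min() >= 0)  # factors remain nonnegative -> True
```

The objective is non-increasing under these updates, but progress per iteration is often slow in practice, which is what second-order (quasi-Newton) NMF solvers aim to fix.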

2016
Julien Pérolat Bilal Piot Matthieu Geist Bruno Scherrer Olivier Pietquin

This paper reports theoretical and empirical investigations on the use of quasi-Newton methods to minimize the Optimal Bellman Residual (OBR) of zero-sum two-player Markov Games. First, it reveals that state-of-the-art algorithms can be derived by the direct application of Newton’s method to different norms of the OBR. More precisely, when applied to the norm of the OBR, Newton’s method results...

2012
David Picard Nicolas Thome Matthieu Cord Alain Rakotomamonjy

We propose a novel algorithm for learning a geometric combination of Gaussian kernels jointly with an SVM classifier. This problem is the product counterpart of MKL, with a restriction to Gaussian kernels. Our algorithm finds a local solution by alternating a quasi-Newton gradient descent over the kernels and a classical SVM solver over the instances. We show promising results on well known data se...
