Search results for: gradient descent algorithm

Number of results: 869,527

Journal: Probl. Inf. Transm. 2005
Anatoli Juditsky, Alexander V. Nazin, Alexandre B. Tsybakov, Nicolas Vayatis

We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under the l1-constraint. It is defined by a stochastic version of the mirror descent algorithm (i.e., of the method which performs gradient descent in the dual space) with an additional av...
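
For context, here is a minimal sketch of stochastic mirror descent with the entropic mirror map on the probability simplex (the exponentiated-gradient update behind the "gradient descent in the dual space" view). The stochastic gradient g is a random placeholder, not the paper's classification risk, and the step schedule is illustrative:

```python
import numpy as np

def mirror_descent_step(w, g, eta):
    """Entropic mirror descent on the simplex: multiplicative update, renormalize."""
    w = w * np.exp(-eta * g)
    return w / w.sum()

# toy usage: aggregate 3 base rules against a placeholder stochastic gradient
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for t in range(1, 101):
    g = rng.normal(size=3)                      # stochastic gradient (placeholder)
    w = mirror_descent_step(w, g, eta=1.0 / np.sqrt(t))
```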

2002
David Ridout Kevin Judd

Gradient descent noise reduction is a technique that attempts to recover the true signal, or trajectory, from noisy observations of a non-linear dynamical system for which the dynamics are known. This paper provides the first rigorous proof that the algorithm will recover the original trajectory for a broad class of dynamical systems under certain conditions. The proof is obtained using ideas f...
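
A minimal sketch of the underlying idea, assuming the logistic map stands in for the known dynamics: start from the noisy observations and run gradient descent on the dynamical mismatch cost C(x) = Σ_t ||x_{t+1} − f(x_t)||². All constants here are illustrative, not the paper's setup:

```python
import numpy as np

def f(x):  return 3.8 * x * (1.0 - x)      # known dynamics (logistic map)
def df(x): return 3.8 * (1.0 - 2.0 * x)    # its derivative

rng = np.random.default_rng(0)
true = np.empty(200); true[0] = 0.3
for t in range(199):
    true[t + 1] = f(true[t])
y = true + 0.01 * rng.normal(size=200)     # noisy observations

x = y.copy()                               # descend starting from the data
eta = 0.02
for _ in range(2000):
    r = x[1:] - f(x[:-1])                  # dynamical residuals x_{t+1} - f(x_t)
    grad = np.zeros_like(x)
    grad[1:]  += 2.0 * r                   # d C / d x_{t+1}
    grad[:-1] -= 2.0 * r * df(x[:-1])      # d C / d x_t
    x -= eta * grad
```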

Journal: CoRR 2017
Chi Jin, Praneeth Netrapalli, Michael I. Jordan

Nesterov's accelerated gradient descent (AGD), an instance of the general family of "momentum methods", provably achieves a faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stat...
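
For reference, a minimal sketch of plain Nesterov AGD on a convex quadratic; the saddle-escaping variant studied in the paper adds a negative-curvature step that is not reproduced here:

```python
import numpy as np

def nesterov_agd(grad, x0, eta=0.1, steps=100):
    """Plain Nesterov acceleration: gradient step at a lookahead point,
    then momentum extrapolation with the classic t-sequence."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - eta * grad(y)                        # gradient step at lookahead
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# usage on a convex quadratic f(x) = 0.5 x^T A x (gradient A x)
A = np.array([[3.0, 0.2], [0.2, 1.0]])
x_star = nesterov_agd(lambda z: A @ z, np.array([5.0, -3.0]))
```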

2017
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matthew Botvinick, Nando de Freitas

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...
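
A minimal sketch of the interface such a learned optimizer exposes: a small recurrent cell applied per coordinate, mapping a gradient and a hidden state to an update. The cell weights below are random stand-ins, so the loop merely exercises the interface; in the paper they would be meta-trained on synthetic functions:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8
W_g = rng.normal(scale=0.1, size=(H, 1))   # gradient -> hidden (random stand-in)
W_h = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden   (random stand-in)
w_o = rng.normal(scale=0.1, size=(1, H))   # hidden -> update   (random stand-in)

def rnn_opt_step(grad, h):
    """One optimizer step for one coordinate: (gradient, state) -> (update, state)."""
    h = np.tanh(W_g * grad + W_h @ h)
    return (w_o @ h).item(), h

# exercise the interface on f(x) = x^2 (gradient 2x); untrained weights,
# so this shows the mechanics rather than actual optimization skill
x, h = 3.0, np.zeros((H, 1))
for _ in range(100):
    update, h = rnn_opt_step(2.0 * x, h)
    x += update
```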

2008
Rick Chartrand, Valentina Staneva

We propose an algorithm for segmentation of grayscale images. Our algorithm computes a solution to the convex, unconstrained minimization problem proposed by T. Chan, S. Esedoḡlu, and M. Nikolova in [1], which is closely related to the Chan-Vese level set algorithm for the Mumford-Shah segmentation model. Up to now this problem has been solved with a gradient descent method. Our approach is a q...
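
For comparison, a minimal sketch of the gradient descent baseline on a smoothed version of the Chan-Esedoḡlu-Nikolova energy, E(u) = TV(u) + λ⟨(f−c1)² − (f−c2)², u⟩ with u ∈ [0,1]; the test image, region means, and step size are illustrative, and the paper's own (faster) method is not reproduced:

```python
import numpy as np

def tv_gradient(u, eps=1e-3):
    """Gradient of smoothed total variation: -div(grad u / |grad u|_eps)."""
    ux = np.gradient(u, axis=0)
    uy = np.gradient(u, axis=1)
    norm = np.sqrt(ux**2 + uy**2 + eps**2)
    px, py = ux / norm, uy / norm
    return -(np.gradient(px, axis=0) + np.gradient(py, axis=1))

rng = np.random.default_rng(0)
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
f += 0.1 * rng.normal(size=f.shape)            # noisy two-phase test image
c1, c2, lam, tau = 1.0, 0.0, 1.0, 0.1          # region means, weight, step size
r = (f - c1)**2 - (f - c2)**2                  # pointwise data term

u = 0.5 * np.ones_like(f)
for _ in range(200):
    u = np.clip(u - tau * (tv_gradient(u) + lam * r), 0.0, 1.0)
segmentation = u > 0.5                         # threshold the relaxed indicator
```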

2012
Boris Mailhé, Mark D. Plumbley

This work presents a new algorithm for dictionary learning. Existing algorithms such as MOD and K-SVD often fail to find the best dictionary because they get trapped in a local minimum. Olshausen and Field's Sparsenet algorithm relies on fixed-step projected gradient descent. With the right step, it can avoid local minima and converge towards the global minimum. The problem then becomes to fi...
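
A minimal sketch of a Sparsenet-style fixed-step projected gradient dictionary update, assuming the sparse codes S were already produced by some sparse coder; the data and step size are placeholders, and the paper's step-selection question is not addressed here:

```python
import numpy as np

def dictionary_step(D, X, S, step):
    """One fixed-step projected gradient update of the dictionary D for the
    objective ||X - D S||_F^2, then renormalize each atom to unit norm."""
    R = X - D @ S                                    # reconstruction residual
    D = D + step * (R @ S.T)                         # move along -grad = R S^T
    return D / np.linalg.norm(D, axis=0, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 100))                       # placeholder data
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0, keepdims=True)
S = rng.normal(size=(32, 100)) * (rng.random((32, 100)) < 0.1)  # fake sparse codes
for _ in range(50):
    D = dictionary_step(D, X, S, step=1e-3)
```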

Journal: IEEE Trans. Signal Processing 1998
Jens Baltersee, Jonathon A. Chambers

New learning algorithms for an adaptive nonlinear forward predictor that is based on a pipelined recurrent neural network (PRNN) are presented. A computationally efficient gradient descent (GD) learning algorithm, together with a novel extended recursive least squares (ERLS) learning algorithm, are proposed. Simulation studies based on three speech signals that have been made public and are ava...
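
A minimal sketch of the flavor of GD learning for a recurrent predictor, reduced to a single recurrent neuron with a truncated (instantaneous) gradient; the full PRNN architecture and the ERLS algorithm are not reproduced, and the signal is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sin(0.1 * np.arange(2000)) + 0.05 * rng.normal(size=2000)  # synthetic signal

w_in, w_rec, b = 0.1, 0.1, 0.0     # parameters of the single recurrent neuron
eta, y_prev = 0.01, 0.0            # learning rate, previous output
for t in range(len(x) - 1):
    y = np.tanh(w_in * x[t] + w_rec * y_prev + b)  # one-step-ahead prediction
    e = x[t + 1] - y                               # prediction error
    g = 1.0 - y * y                                # tanh'(pre-activation)
    # truncated gradient descent on 0.5 * e^2 (recurrent term dropped)
    w_in  += eta * e * g * x[t]
    w_rec += eta * e * g * y_prev
    b     += eta * e * g
    y_prev = y
```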

Journal: Journal of Machine Learning Research 2015
Maren Mahsereci, Philipp Hennig

In deterministic optimization problems, line search routines are a standard tool ensuring stability and efficiency. In the stochastic setting, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic version of the line search paradigm by combining the structure of exis...
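
For contrast, a minimal sketch of the deterministic routine being generalized: Armijo backtracking line search. The probabilistic version replaces this hard sufficient-decrease test with a probabilistic one; none of that machinery appears below:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    """Armijo backtracking: shrink alpha until sufficient decrease holds."""
    fx, slope = f(x), grad_f(x) @ d        # slope must be negative for descent d
    alpha = alpha0
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# usage on a simple quadratic
f = lambda z: 0.5 * z @ z
grad_f = lambda z: z
x = np.array([3.0, -4.0])
d = -grad_f(x)                             # steepest descent direction
x_new = x + backtracking_line_search(f, grad_f, x, d) * d
```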

Journal: Appl. Math. Lett. 2008
Neculai Andrei

A modification of the Dai-Yuan conjugate gradient algorithm is proposed. Under exact line search, the algorithm reduces to the original version of the Dai and Yuan computational scheme. For inexact line searches, the algorithm satisfies both the sufficient descent condition and the conjugacy condition. A global convergence result is proved when the Wolfe line search conditions are used. Computational result...
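
For reference, a minimal sketch of nonlinear conjugate gradient with the original Dai-Yuan beta, β_k = ||g_{k+1}||² / d_kᵀ(g_{k+1} − g_k), on a quadratic test problem where the exact line search has a closed form; the paper's modification is not reproduced:

```python
import numpy as np

# quadratic test problem f(x) = 0.5 x^T A x - b^T x with gradient A x - b
A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
b = np.array([1.0, 2.0])

x = np.zeros(2)
g = A @ x - b                              # gradient at the start
d = -g
for _ in range(20):
    alpha = -(g @ d) / (d @ (A @ d))       # exact line search on the quadratic
    x = x + alpha * d
    g_new = A @ x - b
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan formula
    d = -g_new + beta * d
    g = g_new
```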

2003
Suchendra M. Bhandarkar, Jinling Huang, Jonathan Arnold

Physical map reconstruction in the presence of errors is a central problem in genetics of high computational complexity. A parallel genetic algorithm for a maximum likelihood estimation based approach to physical map reconstruction is presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal p...

[Chart: number of search results per year]