Search results for: gradient descent

Number of results: 137,892

2017
VLADIMIR SOTIROV

The modules Mi are "restrictions" or pullbacks of M along Ui → X, and are compatible in that if Mi,j is the pullback of Mi over Ui along Ui ×X Uj → Ui, then we have isomorphisms Mi,j ≅ Mj,i that satisfy a coherence condition known as the cocycle condition. We say that modules over affine schemes have the property of descent for a family {Ui → X} if any family of compatible modules M...
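For orientation, the cocycle condition mentioned in this snippet is usually stated as follows, in generic notation that need not match the paper's: given compatibility isomorphisms φ_ij between the pullbacks of M_i and M_j to U_i ×_X U_j, one requires on each triple fibre product

\[
  \varphi_{ik} \;=\; \varphi_{jk} \circ \varphi_{ij}
  \qquad \text{on } U_i \times_X U_j \times_X U_k .
\]

Taking i = j = k shows in particular that each φ_ii must be the identity.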

1995
S. Buckley P. Koskela G. Lu

We obtain (weighted) Poincaré-type inequalities for vector fields satisfying the Hörmander condition for p < 1 under some assumptions on the subelliptic gradient of the function. Such inequalities hold on Boman domains associated with the underlying Carnot-Carathéodory metric. In particular, they remain true for solutions to certain classes of subelliptic equations. Our results complement the ea...
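For context, a weighted Poincaré-type inequality of the kind referred to here typically takes the following shape on a metric ball B (a generic formulation; the exact exponents, weights, and assumptions in the paper may differ):

\[
  \left( \frac{1}{w(B)} \int_B |u - u_B|^{q} \, w \, dx \right)^{1/q}
  \;\le\;
  C \, r(B) \left( \frac{1}{w(B)} \int_B |Xu|^{p} \, w \, dx \right)^{1/p},
\]

where Xu denotes the subelliptic gradient formed from the Hörmander vector fields, r(B) is the radius of B in the Carnot-Carathéodory metric, u_B is an average of u over B, and w is the weight.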

2005
Christopher J.C. Burges

We propose a new method for learning structured outputs using gradient descent.

Journal: CoRR 2018
Varun Ranganathan S. Natarajan

The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. Backpropagation relies on gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, ...
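As a quick illustration of the first-order update this abstract refers to, here is a minimal gradient-descent sketch in Python. It is a generic example, not code from the paper; the objective, starting point, and learning rate are arbitrary placeholders.

import math

def grad_f(x):
    # Gradient of f(x) = (x - 3)^2, whose minimum is at x = 3.
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)   # step against the gradient direction
    return x

print(gradient_descent(x0=0.0))  # converges toward 3.0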

2018
Rahul Kidambi Praneeth Netrapalli Prateek Jain Sham M. Kakade

Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov’s accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, “fast gradient” methods have provable improvements over gradient descent...
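To make the distinction concrete, the heavy-ball (momentum) update mentioned in this abstract differs from plain gradient descent roughly as sketched below. This is a schematic 1-D example; the step size, momentum coefficient, and objective are illustrative choices, not values from the paper.

def grad(w):
    return 2.0 * (w - 1.0)            # gradient of (w - 1)^2

def plain_gd(w, lr=0.1, steps=200):
    for _ in range(steps):
        w -= lr * grad(w)             # vanilla gradient step
    return w

def heavy_ball(w, lr=0.1, beta=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)   # momentum accumulates past gradients
        w += v
    return w

print(plain_gd(0.0), heavy_ball(0.0))  # both approach the minimum at 1.0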

Journal: Photonics 2021

For a high-power slab solid-state laser, high output power and good beam quality are the most important performance indicators. Adaptive optics systems can significantly improve beam quality by compensating for phase distortions of the laser beam. In this paper, we developed an improved algorithm, Gradient Estimation Stochastic Parallel Descent (AGESPGD), for laser beam cleanup. A second-order gradient search p...
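For background, the stochastic parallel gradient descent (SPGD) family of algorithms that this work builds on typically applies small random perturbations to all control channels in parallel and estimates the gradient from the resulting change in a performance metric. Below is a rough Python sketch of a basic SPGD loop, with a made-up metric function and illustrative hyperparameters; it is not the paper's AGESPGD variant.

import numpy as np

# measure_metric(u) stands in for reading an optical performance metric
# after applying control signals u to the corrector; here it is a toy
# quadratic so the sketch runs on its own.
rng = np.random.default_rng(0)
target = rng.normal(size=32)                 # hypothetical optimal control vector

def measure_metric(u):
    return -np.sum((u - target) ** 2)        # toy stand-in for the optical metric

def spgd(u, gain=0.5, sigma=0.05, iters=2000):
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=u.shape)   # parallel perturbation
        dj = measure_metric(u + delta) - measure_metric(u - delta)
        u = u + gain * dj * delta            # gradient-estimate step to maximize the metric
    return u

u = spgd(np.zeros(32))
print(measure_metric(u))                     # increases toward 0 as u approaches target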

Journal: IEEE Transactions on Neural Networks and Learning Systems 2020

[Chart: number of search results per year]