Search results for: gradient descent
Number of results: 137,892
The modules Mi are "restrictions" or pullbacks of M along Ui → X, and are compatible in the sense that if Mi,j is the pullback of Mi over Ui along Ui ×X Uj → Ui, then we have isomorphisms Mi,j ≅ Mj,i that satisfy a coherence condition known as the cocycle condition. We say that modules over affine schemes have the property of descent for a family {Ui → X} if any family of compatible modules M...
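For orientation, the cocycle condition referenced in this snippet is commonly written as follows; the gluing maps φij and the shorthand Uij are generic notation, not taken from the truncated source:

% Gluing data: on each overlap U_{ij} := U_i \times_X U_j, an isomorphism
%   \varphi_{ij} \colon M_j|_{U_{ij}} \xrightarrow{\ \sim\ } M_i|_{U_{ij}}.
% The cocycle condition asks for compatibility on triple overlaps:
\varphi_{ij} \circ \varphi_{jk} = \varphi_{ik}
\quad \text{on } U_i \times_X U_j \times_X U_k,
\qquad \varphi_{ii} = \mathrm{id}_{M_i}.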
We obtain (weighted) Poincaré type inequalities for vector fields satisfying the Hörmander condition for p < 1 under some assumptions on the subelliptic gradient of the function. Such inequalities hold on Boman domains associated with the underlying Carnot-Carathéodory metric. In particular, they remain true for solutions to certain classes of subelliptic equations. Our results complement the ea...
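For context, a weighted Poincaré type inequality of the kind this snippet refers to is often stated in the following generic form; the ball B, weight w, and vector fields X1, ..., Xm are placeholder notation, not the paper's exact statement:

% A generic weighted (p,p) Poincaré inequality for Hörmander vector fields:
\Big( \frac{1}{w(B)} \int_B |u - u_B|^p \, w \, dx \Big)^{1/p}
\le C \, r(B) \, \Big( \frac{1}{|B|} \int_B |Xu|^p \, dx \Big)^{1/p},
\qquad |Xu|^2 = \sum_{j=1}^{m} |X_j u|^2,

where B is a metric ball of radius r(B) in the Carnot-Carathéodory metric, u_B is an average of u over B, and Xu is the subelliptic (horizontal) gradient.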
We propose a new method for learning structured outputs using gradient descent.
The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. Backpropagation relies on gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, ...
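A minimal sketch of the gradient descent update described in this snippet, in Python; the quadratic objective and the step size are illustrative choices, not from the cited work:

import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain first-order gradient descent: x <- x - lr * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
print(x_min)  # approaches 3.0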
Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov’s accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, “fast gradient” methods have provable improvements over gradient descent...
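For reference, hedged sketches of the two momentum updates named in this snippet, heavy ball (HB) and Nesterov's accelerated gradient (NAG); the learning rate and momentum coefficient are illustrative placeholders:

import numpy as np

def heavy_ball_step(x, v, grad, lr=0.01, beta=0.9):
    """Heavy ball (Polyak momentum): v <- beta*v - lr*grad(x); x <- x + v."""
    v = beta * v - lr * grad(x)
    return x + v, v

def nesterov_step(x, v, grad, lr=0.01, beta=0.9):
    """NAG: same update, but the gradient is taken at the look-ahead
    point x + beta*v rather than at x itself."""
    v = beta * v - lr * grad(x + beta * v)
    return x + v, v

# Usage on the same toy objective f(x) = (x - 3)^2:
x, v = np.array([0.0]), np.array([0.0])
for _ in range(200):
    x, v = nesterov_step(x, v, lambda z: 2 * (z - 3))
# x approaches 3.0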
For a high-power slab solid-state laser, high output power and beam quality are the most important performance indicators. Adaptive optics systems can significantly improve beam quality by compensating for phase distortions of the laser beam. In this paper, we developed an improved algorithm, adaptive gradient estimation stochastic parallel gradient descent (AGESPGD), for laser beam cleanup. A second-order gradient search p...
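As context for the SPGD family of methods this snippet builds on, a hedged sketch of one basic stochastic parallel gradient descent iteration; the metric function J, perturbation size, and gain are illustrative placeholders, and this is the plain SPGD step rather than the paper's improved AGESPGD variant:

import numpy as np

def spgd_step(u, J, delta=0.01, gain=1.0, rng=None):
    """One SPGD iteration on the control vector u, maximizing the metric J.
    Apply a random +/-delta perturbation to all controls in parallel,
    measure the two-sided change in the metric, and move each control
    in the direction that improved it."""
    rng = rng or np.random.default_rng()
    du = delta * rng.choice([-1.0, 1.0], size=u.shape)
    dJ = J(u + du) - J(u - du)   # two-sided metric difference
    return u + gain * dJ * du    # parallel gradient-estimate update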