Search results for: Stochastic Gradient Descent Learning

Number of results: 840,759

2014
George Papamakarios

Gradient-based optimization methods are popular in machine learning applications. In large-scale problems, stochastic methods are preferred due to their good scaling properties. In this project, we compare the performance of four gradient-based methods: gradient descent, stochastic gradient descent, semi-stochastic gradient descent, and stochastic average gradient. We consider logistic regressio...
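The four updates compared above differ mainly in how much data each step touches. As a rough illustration (not the authors' code), here is a minimal NumPy sketch of the two endpoints, full-batch gradient descent and single-sample SGD, on L2-regularized logistic regression; function names and hyperparameters are placeholders.

```python
import numpy as np

def logistic_grad(w, X, y, lam=1e-3):
    """Gradient of L2-regularized logistic loss; labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y) + lam * w

def gd(w, X, y, lr=0.5, steps=100):
    """Full-batch gradient descent: one full gradient per step."""
    for _ in range(steps):
        w = w - lr * logistic_grad(w, X, y)
    return w

def sgd(w, X, y, lr=0.5, steps=100):
    """Stochastic gradient descent: one sampled example per step."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(len(y))
        w = w - lr * logistic_grad(w, X[i:i+1], y[i:i+1])
    return w
```

Semi-stochastic gradient descent and stochastic average gradient sit between these two endpoints: both reuse stored or periodically refreshed full-gradient information to reduce the variance of the single-sample update.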

Journal: SIAM J. Financial Math. 2017
Justin A. Sirignano, Konstantinos Spiliopoulos

We consider stochastic gradient descent for continuous-time models. Traditional approaches for the statistical estimation of continuous-time models, such as batch optimization, can be impractical for large datasets where observations occur over a long period of time. Stochastic gradient descent provides a computationally efficient method for such statistical learning problems. The stochastic gr...
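A minimal sketch of the idea, assuming the continuous-time SGD update dθ = α ∇θ f(X_t; θ)(dX_t − f(X_t; θ) dt) applied to online drift estimation for an Ornstein-Uhlenbeck process; the model, constants, and Euler discretization below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma, dt, alpha = 2.0, 0.5, 1e-2, 0.5

x, theta = 1.0, 0.0   # state and online parameter estimate
for _ in range(200_000):
    dW = rng.normal(0.0, np.sqrt(dt))
    dx = -theta_true * x * dt + sigma * dW   # observed increment of the data
    f = -theta * x                           # model drift f(x; theta)
    grad_f = -x                              # d f / d theta
    theta += alpha * grad_f * (dx - f * dt)  # continuous-time SGD update
    x += dx

# theta should approach theta_true = 2.0, up to fluctuation from
# the constant step size (a decaying alpha would remove it)
print(theta)
```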

Journal: CoRR 2015
Andrew J. R. Simpson

In a recent article we described a new type of deep neural network, a Perpetual Learning Machine (PLM), which is capable of learning 'on the fly' like a brain by existing in a state of Perpetual Stochastic Gradient Descent (PSGD). Here, by simulating the process of practice, we demonstrate both selective memory and selective forgetting when we introduce statistical recall biases during PSGD. F...
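Loosely illustrating the setup (this is not the authors' network or data): a perpetual SGD loop over a stored memory, where a recall-bias distribution controls how often each item is replayed; the linear model and all names here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def psgd_step(w, x, y, lr=0.01):
    """One SGD step on a single (x, y) pair for a linear model, squared loss."""
    err = x @ w - y
    return w - lr * err * x

# memory of stored (x, y) pairs; recall_bias[i] weights how often
# item i is replayed during perpetual training
memory_X = rng.normal(size=(100, 5))
memory_y = rng.normal(size=100)
recall_bias = np.ones(100)
recall_bias[:10] = 10.0        # these items are "practiced" more often

w = np.zeros(5)
p = recall_bias / recall_bias.sum()
for _ in range(50_000):        # perpetual loop (truncated here)
    i = rng.choice(100, p=p)   # biased recall: practiced items dominate
    w = psgd_step(w, memory_X[i], memory_y[i])
# heavily recalled items end up with lower error (selective memory);
# rarely recalled items drift toward higher error (selective forgetting)
```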

Journal: Journal of Scientific Computing 2021

Stochastic gradient descent (SGD) for strongly convex functions converges at the rate $\mathcal{O}(1/k)$. However, achieving good results in practice requires tuning the parameters (for example, the learning rate) of the algorithm. In this paper we propose a generalization of the Polyak step size, used in subgradient methods, to stochastic gradient descent. We prove non-asymptotic convergence with a constant which can be be...
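The classical Polyak step for subgradient methods is η_k = (f(x_k) − f*) / ‖∇f(x_k)‖²; the stochastic variant evaluates it on the sampled function f_i. A hedged NumPy sketch on least squares, assuming the per-sample optimum f_i* = 0 (an interpolation assumption) and a scaling constant c, neither taken verbatim from the paper:

```python
import numpy as np

def sgd_polyak(w, X, y, steps=1000, c=1.0, eps=1e-12, seed=0):
    """SGD with a stochastic Polyak step size on least squares.

    Per-sample loss f_i(w) = 0.5 * (x_i @ w - y_i)**2, with f_i*
    taken to be 0 (interpolation assumption). The step is
        eta_k = f_i(w) / (c * ||grad f_i(w)||^2).
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(y))
        r = X[i] @ w - y[i]
        loss = 0.5 * r * r
        grad = r * X[i]
        eta = loss / (c * (grad @ grad) + eps)
        w = w - eta * grad
    return w
```

Note that the step size adapts per sample: for least squares it reduces to 1 / (2c‖x_i‖²), so no learning-rate schedule has to be tuned by hand.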

Journal: IEEE Control Systems Letters 2022

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on the direct optimization of parametric policies. We propose a derivation of adjoint sensitivity results for differential equations through the application of variational calculus. Then, given an objective function for a predetermined task specifying the desiderata for the controller, we optimize their parameters via iterative gradi...
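As a rough sketch of the pipeline (simulate the controlled SDE, evaluate the task objective, update the policy parameters by gradient descent): the example below uses a scalar linear system, a linear feedback policy, and a finite-difference gradient in place of the paper's adjoint-sensitivity derivation; all constants are illustrative.

```python
import numpy as np

dt, T, sigma = 0.01, 1.0, 0.3
n_steps = int(T / dt)

def rollout_cost(k, n_paths=256, seed=1):
    """Monte Carlo estimate of J(k) = E[sum (x^2 + u^2) dt] under
    dx = u dt + sigma dW with a linear feedback policy u = -k * x."""
    r = np.random.default_rng(seed)
    x = np.ones(n_paths)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = -k * x
        cost += (x**2 + u**2) * dt
        x += u * dt + sigma * np.sqrt(dt) * r.normal(size=n_paths)
    return cost.mean()

# iterative gradient descent on the policy parameter; the gradient here
# is a central finite difference with common random numbers (the paper
# instead derives it via adjoint sensitivities / variational calculus)
k, lr, h = 0.0, 0.5, 1e-2
for it in range(100):
    g = (rollout_cost(k + h, seed=it) - rollout_cost(k - h, seed=it)) / (2 * h)
    k -= lr * g
print(k)  # learned feedback gain
```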

2002
Nicol N. Schraudolph, Thore Graepel

The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individ...

2003
Nicol N. Schraudolph, Thore Graepel

The method of conjugate directions provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within indivi...
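The key primitive in both entries above is a fast Hessian-vector product, which lets one build a low-dimensional Krylov subspace from a mini-batch gradient without ever forming the Hessian. A minimal NumPy sketch follows; the finite-difference product stands in for the exact Pearlmutter-style product used in this line of work, and the logistic model is illustrative.

```python
import numpy as np

def grad(w, X, y, lam=1e-3):
    """Gradient of L2-regularized logistic loss on a mini-batch."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y) + lam * w

def hessian_vec(w, v, X, y, eps=1e-6):
    """Approximate Hessian-vector product via a forward difference
    of gradients (an exact R-operator product can replace this)."""
    return (grad(w + eps * v, X, y) - grad(w, X, y)) / eps

def krylov_basis(w, X, y, m=4):
    """Orthonormal basis of the Krylov subspace
    span{g, Hg, ..., H^{m-1} g} built from the mini-batch gradient g
    and repeated Hessian-vector products."""
    g = grad(w, X, y)
    V = [g / np.linalg.norm(g)]
    for _ in range(m - 1):
        u = hessian_vec(w, V[-1], X, y)
        for q in V:                      # Gram-Schmidt orthogonalization
            u -= (q @ u) * q
        V.append(u / (np.linalg.norm(u) + 1e-12))
    return np.stack(V)
```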

2014
Atsushi Nitanda

Proximal gradient descent (PGD) and stochastic proximal gradient descent (SPGD) are popular methods for solving regularized risk minimization problems in machine learning and statistics. In this paper, we propose and analyze an accelerated variant of these methods in the mini-batch setting. This method incorporates two acceleration techniques: one is Nesterov’s acceleration method, and the othe...
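A hedged sketch of the two ingredients, assuming an L1-regularized least-squares objective: Nesterov-style extrapolation between iterates, followed by a proximal (soft-thresholding) step after each mini-batch gradient. Step sizes and batch size are placeholders, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accel_spgd(X, y, lam=0.1, lr=0.01, batch=32, epochs=20, seed=0):
    """Mini-batch proximal gradient with Nesterov-style acceleration
    on the lasso: min_w 0.5 * ||Xw - y||^2 / n + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = w_prev = np.zeros(d)
    t = t_prev = 1.0
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.integers(n, size=batch)
            # Nesterov extrapolation point
            v = w + ((t_prev - 1.0) / t) * (w - w_prev)
            g = X[idx].T @ (X[idx] @ v - y[idx]) / batch
            w_prev, w = w, soft_threshold(v - lr * g, lr * lam)
            t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return w
```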

Journal: IEEE Transactions on Neural Networks and Learning Systems 2020

Journal: Journal of AI and Data Mining 2015
F. Alibakhshi, M. Teshnehlab, M. Alibakhshi, M. Mansouri

The stability of the learning rate in neural network identifiers and controllers is one of the challenging issues which attracts great interest from researchers of neural networks. This paper suggests an adaptive gradient descent algorithm with stable learning laws for a modified dynamic neural network (MDNN) and studies the stability of this algorithm. Also, a stable learning algorithm for parameters of ...
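As a generic illustration of a stable learning law (not the MDNN-specific law derived in the paper): for a linear-in-parameters identifier with squared error, gradient descent contracts the error whenever the learning rate stays below 2/‖x‖², so the rate can be adapted online by clipping to that bound with a safety margin.

```python
import numpy as np

def stable_adaptive_step(w, x, y_target, eta_max_scale=1.9):
    """One gradient step on a linear-in-parameters identifier, with the
    learning rate adapted to a stability bound.

    For error e = y_target - w @ x, the update w += eta * e * x gives
    e_new = e * (1 - eta * ||x||^2), which contracts when
    eta < 2 / ||x||^2; eta_max_scale < 2 keeps a stability margin.
    """
    e = y_target - w @ x
    grad = -e * x                               # gradient of 0.5 * e**2
    eta_stable = eta_max_scale / (x @ x + 1e-12)
    return w - eta_stable * grad
```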

[Chart: number of search results per year]
