Search results for: gradient descent algorithm

Number of results: 869527

Journal: Journal of Advances in Computer Research, 2012
Ahmad Jafarian, Raheleh Jafari, Safa Measoomy Nia

Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper mainly intends to offer a novel method for finding a solution of a fuzzy equation that supposedly has a real solution. To this end, we applied an architecture of fuzzy neural networks in which the corresponding connection weights are real numbers. The ...
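
The entry above describes a fuzzy neural network with real connection weights trained to satisfy a fuzzy equation. As a hedged illustration of the underlying idea only, the sketch below runs plain gradient descent on the squared alpha-cut residuals of a fuzzy linear equation A·x = B with a real unknown x; the cut representation, loss, and triangular fuzzy numbers are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def solve_fuzzy_linear(a_cuts, b_cuts, x0=0.0, lr=0.01, iters=2000):
    """Gradient descent for a real root of a fuzzy equation A*x = B (sketch).

    Each fuzzy number is represented by alpha-cut endpoints; for x >= 0 the
    cut equation is [a_lo*x, a_hi*x] = [b_lo, b_hi], and we descend on the
    summed squared endpoint residuals.
    """
    x = x0
    for _ in range(iters):
        grad = 0.0
        for (a_lo, a_hi), (b_lo, b_hi) in zip(a_cuts, b_cuts):
            # d/dx of (a_lo*x - b_lo)^2 + (a_hi*x - b_hi)^2
            grad += 2 * (a_lo * x - b_lo) * a_lo + 2 * (a_hi * x - b_hi) * a_hi
        x -= lr * grad
    return x

# usage: A is the triangular fuzzy number (2, 3, 4) and B = 1.5 * A,
# so the real solution is x = 1.5
alphas = np.linspace(0.0, 1.0, 5)
a_cuts = [(2 + a, 4 - a) for a in alphas]
b_cuts = [(1.5 * lo, 1.5 * hi) for lo, hi in a_cuts]
print(solve_fuzzy_linear(a_cuts, b_cuts))  # ~1.5
```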

Journal: SIAM J. Financial Math., 2017
Justin A. Sirignano, Konstantinos Spiliopoulos

We consider stochastic gradient descent for continuous-time models. Traditional approaches for the statistical estimation of continuous-time models, such as batch optimization, can be impractical for large datasets where observations occur over a long period of time. Stochastic gradient descent provides a computationally efficient method for such statistical learning problems. The stochastic gr...
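
Since several entries on this page build on the same update rule, a minimal sketch of plain stochastic gradient descent on a least-squares objective may help; the linear model, synthetic data, and decaying step size are illustrative assumptions, not the paper's continuous-time estimator.

```python
import numpy as np

def sgd_least_squares(X, y, lr0=0.1, epochs=5, seed=0):
    """Vanilla SGD on the per-sample loss 0.5 * (x.w - y)^2.

    One randomly ordered sample per update, with a decaying step size.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            lr = lr0 / (1.0 + 0.01 * t)        # step-size decay
            grad = (X[i] @ w - y[i]) * X[i]    # gradient of the sample loss
            w -= lr * grad
    return w

# usage: recover a planted linear model from noisy observations
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=1000)
print(sgd_least_squares(X, y))  # ~[2, -1, 0.5]
```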

Journal: Foundations of Computational Mathematics, 2008
Yiming Ying, Massimiliano Pontil

This paper considers the least-squares online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without explicit regularization. We present a novel capacity-independent approach to derive error bounds and convergence results for this algorithm. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately c...
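
The algorithm in question admits a compact implementation: the unregularized online update is f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, ·), so the iterate lives in the span of the kernel sections at the observed points. The Gaussian kernel and the polynomially decaying step sizes below are assumptions for illustration; as the abstract notes, the step-size choice is what supplies the implicit regularization.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def online_kernel_gd(stream, eta0=0.5, theta=0.6):
    """Least-squares online gradient descent in an RKHS, no regularizer.

    f is stored as kernel coefficients on the points seen so far; each new
    observation appends one coefficient -eta_t * (f_t(x_t) - y_t).
    """
    points, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        fx = sum(c * gaussian_kernel(p, x) for p, c in zip(points, coeffs))
        eta = eta0 * t ** (-theta)     # decaying step size eta_t = eta0 * t^-theta
        points.append(x)
        coeffs.append(-eta * (fx - y))
    return points, coeffs

# usage: learn f(x) = sin(x) from a stream of noisy samples
rng = np.random.default_rng(0)
stream = [(np.array([u]), np.sin(u) + 0.05 * rng.normal())
          for u in rng.uniform(-3, 3, size=200)]
points, coeffs = online_kernel_gd(stream)
```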

Journal: CoRR, 2017
Zhen Wang, Yuan-Hai Shao, Lan Bai, Li-Ming Liu, Nai-Yang Deng

The stochastic gradient descent algorithm has been successfully applied to support vector machines (PEGASOS) for many classification problems. In this paper, the stochastic gradient descent algorithm is applied to twin support vector machines for classification. Compared with PEGASOS, the proposed stochastic gradient twin support vector machine (SGTSVM) is insensitive to stochastic samplin...
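
SGTSVM's own update is truncated above, but the PEGASOS baseline it is compared against has a well-known core: sample one example and take a sub-gradient step on the regularized hinge loss with step size 1/(lambda*t). A minimal sketch:

```python
import numpy as np

def pegasos(X, y, lam=0.01, iters=10_000, seed=0):
    """PEGASOS: stochastic sub-gradient descent on the primal SVM objective
    lam/2 * ||w||^2 + mean hinge loss. Labels y must be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)                  # PEGASOS step size
        if y[i] * (X[i] @ w) < 1:              # hinge active: margin violated
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                  # hinge inactive: shrink only
            w = (1 - eta * lam) * w
    return w

# usage: separable toy data
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) + 1.0
y = np.where(X.sum(axis=1) > 2.0, 1, -1)
w = pegasos(X, y)
print((np.sign(X @ w) == y).mean())            # training accuracy
```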

Journal: Journal of Machine Learning Research, 2009
Antoine Bordes, Léon Bottou, Patrick Gallinari

The SGD-QN algorithm is a stochastic gradient descent algorithm that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as first-order stochastic gradient descent but requires fewer iterations to achieve the same accuracy. This algorithm won the “Wild Track” of the first PAS...
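
The published SGD-QN algorithm has more machinery than fits here, but the design the abstract describes, cheap first-order steps with occasionally refreshed diagonal second-order scaling, can be sketched as follows. This is a simplified illustration under assumed details (least-squares model, secant-ratio estimate, EMA smoothing), not the authors' algorithm.

```python
import numpy as np

def diag_precond_sgd(X, y, lam=1e-4, iters=5000, skip=16, seed=0):
    """SGD with a diagonal preconditioner refreshed every `skip` steps.

    B approximates the diagonal of the inverse Hessian from secant
    information dw/dg; most iterations are ordinary first-order steps,
    so the per-iteration cost stays close to plain SGD.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    B = np.ones(X.shape[1])                    # diagonal scaling
    for t in range(1, iters + 1):
        i = rng.integers(len(X))
        g = (X[i] @ w - y[i]) * X[i] + lam * w
        step = 1.0 / (t + 10.0)
        if t % skip == 0:                      # occasional secant refresh
            w_new = w - step * B * g
            g_new = (X[i] @ w_new - y[i]) * X[i] + lam * w_new
            dw, dg = w_new - w, g_new - g
            ratio = np.divide(dw, dg, out=B.copy(), where=np.abs(dg) > 1e-12)
            B = np.clip(0.9 * B + 0.1 * ratio, 1e-2, 1e2)
            w = w_new
        else:                                  # cheap scaled first-order step
            w = w - step * B * g
    return w
```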

2017
Saghir Ahmad, Akhtar Kalam

Abstract: The existing literature predominantly concentrates on the gradient descent algorithm for designing control systems that enhance power system stability. In this paper, various flavors of the Conjugate Gradient (CG) algorithm are employed to design an online neuro-fuzzy linearization-based adaptive control strategy for Line Commutated Converters’ (LCC) Hig...
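
For contrast with plain gradient descent, one common CG "flavor" is Polak-Ribiere nonlinear conjugate gradient, sketched below on a generic smooth objective. The fixed step size standing in for a line search is an assumption to keep the sketch short; this is not the paper's neuro-fuzzy controller.

```python
import numpy as np

def nonlinear_cg(grad, x0, lr=1e-2, iters=500):
    """Polak-Ribiere+ nonlinear conjugate gradient.

    Each search direction mixes the new negative gradient with the
    previous direction; max(0, .) restarts to steepest descent whenever
    the PR coefficient turns negative.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        x = x + lr * d                                   # step along direction
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# usage: an ill-conditioned quadratic, where conjugate directions help
A = np.diag([1.0, 25.0])
print(nonlinear_cg(lambda x: A @ x, [5.0, 5.0]))         # ~[0, 0]
```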

Journal: :journal of advances in computer research 2012
ahmad jafarian safa measoomy nia raheleh jafari

artificial neural networks have the advantages such as learning, adaptation, fault-tolerance, parallelism and generalization. this paper mainly intends to offer a novel method for finding a solution of a fuzzy equation that supposedly has a real solution. for this scope, we applied an architecture of fuzzy neural networks such that the corresponding connection weights are real numbers. the sugg...

Journal: Journal of AI and Data Mining, 2015
M. M. Fateh, S. Azargoshasb

This paper presents a discrete-time robust control for electrically driven robot manipulators in the task space. A novel discrete-time model-free control law is proposed by employing an adaptive fuzzy estimator to compensate for uncertainty, including model uncertainty, external disturbances, and discretization error. Parameters of the fuzzy estimator are adapted to minimize the estimat...
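
The adaptation law is truncated above, but gradient-descent adaptation of a linear-in-parameter estimator is the standard pattern in such designs. A hedged sketch, where an abstract feature vector stands in for the fuzzy membership grades:

```python
import numpy as np

def adaptive_estimator(signal, features, gamma=0.1):
    """Discrete-time gradient adaptation of u_hat = theta . phi_k.

    At each sample k, theta takes a gradient step on the squared
    estimation error 0.5 * (u_hat - u_k)^2; the control law that would
    consume u_hat is outside this sketch.
    """
    theta = np.zeros(features.shape[1])
    errors = []
    for u_k, phi_k in zip(signal, features):
        e = theta @ phi_k - u_k            # estimation error
        theta -= gamma * e * phi_k         # gradient of 0.5 * e^2
        errors.append(e)
    return theta, np.array(errors)

# usage: track u_k = 2*sin(k/10) with features [sin(k/10), cos(k/10)]
k = np.arange(300)
phi = np.stack([np.sin(k / 10), np.cos(k / 10)], axis=1)
theta, e = adaptive_estimator(2 * np.sin(k / 10), phi)
print(theta)                               # ~[2, 0]
```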

Abstract: In this paper, a novel technique based on a fuzzy method is presented for chaotic nonlinear time series prediction. A fuzzy approach with a gradient learning algorithm constitutes the main component of this method. The learning process is similar to the conventional gradient descent learning process, except that the input patterns and parameters are stored in mem...
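
As a reference point for the comparison drawn above, the conventional gradient descent learning process for one-step-ahead prediction looks as follows; the linear autoregressive model and logistic-map data are assumptions standing in for the paper's fuzzy rule base.

```python
import numpy as np

def train_predictor(series, order=3, lr=0.05, epochs=50):
    """Conventional gradient-descent learning of a one-step predictor.

    Prediction: x_hat_t = w . [x_{t-order}, ..., x_{t-1}]; each sample
    triggers a gradient step on the squared prediction error.
    """
    w = np.zeros(order)
    for _ in range(epochs):
        for t in range(order, len(series)):
            x = series[t - order:t]        # input pattern
            e = w @ x - series[t]          # one-step prediction error
            w -= lr * e * x                # gradient of 0.5 * e^2
    return w

# usage: chaotic logistic-map series x_{t+1} = 4 x_t (1 - x_t)
xs = [0.3]
for _ in range(500):
    xs.append(4 * xs[-1] * (1 - xs[-1]))
print(train_predictor(np.array(xs)))
```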

2018
Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham M. Kakade

Momentum-based stochastic gradient methods such as heavy ball (HB) and Nesterov’s accelerated gradient descent (NAG) are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, “fast gradient” methods have provable improvements over gradient descent...
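
The two momentum updates named above differ only in where the gradient is evaluated; minimal implementations of both, with illustrative step-size and momentum values:

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.01, momentum=0.9, iters=500):
    """Polyak's heavy ball: add a fraction of the previous displacement."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = momentum * v - lr * grad(x)        # gradient at the current point
        x = x + v
    return x

def nesterov(grad, x0, lr=0.01, momentum=0.9, iters=500):
    """NAG: evaluate the gradient at the look-ahead point x + momentum*v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = momentum * v - lr * grad(x + momentum * v)
        x = x + v
    return x

# usage: both reach the minimum of an ill-conditioned quadratic
A = np.diag([1.0, 30.0])
g = lambda x: A @ x
print(heavy_ball(g, [5.0, 5.0]), nesterov(g, [5.0, 5.0]))   # both ~[0, 0]
```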

[Chart: number of search results per year]