Search results for: gradient descent algorithm

Number of results: 869,527

2009
Jean-Antoine Désidéri

In a previous report [3], a methodology for the numerical treatment of a two-objective optimization problem, possibly subject to equality constraints, was proposed. The method was devised to be adapted to cases where an initial design-point is known and such that one of the two disciplines, considered to be preponderant, or fragile, and said to be the primary discipline, achieves a local or glo...
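The abstract above is truncated in this listing; purely as context for the general idea of descending two objectives at once, the sketch below takes a step along the minimum-norm convex combination of the two gradients, a standard common-descent construction. The toy objectives, step size, and function names are illustrative assumptions and are not taken from the report.

```python
import numpy as np

def common_descent_step(x, grad_f1, grad_f2, lr=0.1):
    """Step along the minimum-norm convex combination of the two gradients,
    which (when nonzero) is a descent direction for both objectives."""
    g1, g2 = grad_f1(x), grad_f2(x)
    d = g1 - g2
    denom = d @ d
    # alpha in [0, 1] minimizing ||alpha*g1 + (1 - alpha)*g2||^2
    alpha = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    omega = alpha * g1 + (1 - alpha) * g2
    return x - lr * omega

# toy two-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(200):
    x = common_descent_step(x, lambda v: 2 * (v - a), lambda v: 2 * (v - b))
print(x)  # ends up on the segment between a and b (Pareto-stationary)
```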

Journal: Iranian Economic Review 2004

Applying nonlinear models to the estimation and forecasting of economic models is now becoming more common, thanks to advances in computing technology. Artificial Neural Network (ANN) models, which are nonlinear local-optimizer models, have proven successful in forecasting economic variables. Most ANN models applied in Economics use the gradient descent method as their learning algorithm. However, t...
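The abstract is cut off in the listing; for context only, here is a generic sketch of the gradient-descent learning rule it refers to, applied to a one-hidden-layer feed-forward network on synthetic regression data. The architecture, data, and learning rate are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic regression data (a stand-in for economic indicators)
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -0.5, 0.25])) + 0.1 * rng.normal(size=200)

# one-hidden-layer network trained by plain batch gradient descent
H = 8
W1 = rng.normal(scale=0.5, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);      b2 = 0.0
lr, n = 0.05, len(y)

for epoch in range(2000):
    # forward pass
    z = np.tanh(X @ W1 + b1)               # hidden activations
    pred = z @ W2 + b2                     # network output
    err = pred - y                         # residuals
    # backward pass: gradients of the half mean-squared error
    gW2 = z.T @ err / n
    gb2 = err.mean()
    dz = np.outer(err, W2) * (1 - z ** 2)  # backprop through tanh
    gW1 = X.T @ dz / n
    gb1 = dz.mean(axis=0)
    # gradient descent update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("training MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```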

2007
Yiming Ying, Massimiliano Pontil

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error whi...
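To make the algorithm named in the truncated abstract concrete, the sketch below implements unregularized online gradient descent for least squares in an RKHS in its kernel-expansion form, f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, ·). The Gaussian kernel, the 1/sqrt(t) step sizes, and the toy data are illustrative assumptions and not necessarily the setting analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return float(np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2)))

def online_kernel_gd(stream, kernel, eta=lambda t: 1.0 / np.sqrt(t)):
    """Unregularized online gradient descent for the (half) squared loss in an RKHS.
    The hypothesis is stored as a kernel expansion f_t = sum_s c_s * K(x_s, .)."""
    centers, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        # evaluate the current hypothesis at the new input
        f_x = sum(c * kernel(xs, x) for xs, c in zip(centers, coeffs))
        # functional gradient step: f <- f - eta_t * (f(x) - y) * K(x, .)
        centers.append(x)
        coeffs.append(-eta(t) * (f_x - y))
    return centers, coeffs

# toy usage: learn a noisy sine function from a stream of examples
rng = np.random.default_rng(1)
xs = rng.uniform(-3, 3, size=300)
stream = [(np.array([v]), np.sin(v) + 0.1 * rng.normal()) for v in xs]
centers, coeffs = online_kernel_gd(stream, gaussian_kernel)

predict = lambda v: sum(c * gaussian_kernel(s, np.array([v]))
                        for s, c in zip(centers, coeffs))
print(predict(1.0), np.sin(1.0))  # prediction vs. the noiseless target
```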

Journal: European Journal of Operational Research 2020

Journal: Proceedings of the AAAI Conference on Artificial Intelligence 2020

Journal: International Journal of Advanced Computer Science and Applications 2017

Journal: Journal of Computational and Applied Mathematics 2009

1995
Kim W. C. Ku, M. W. Mak, W. C. Siu

Recurrent neural networks (RNNs), with the capability of dealing with spatio-temporal relationships, are more complex than feed-forward neural networks. Training of RNNs by gradient descent methods becomes more difficult. Therefore, another training method, which uses cellular genetic algorithms, is proposed. In this paper, the performance of training by a gradient descent method is compared with...
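Since the comparison in the truncated abstract centers on evolutionary training of RNN weights, here is a minimal sketch of that idea using a plain generational genetic algorithm (not the cellular variant the paper proposes) to fit a tiny recurrent network on a toy next-value prediction task. The task, population size, and GA operators are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy task: one-step-ahead prediction of a sine sequence with a tiny RNN
T = 40
seq = np.sin(0.3 * np.arange(T + 1))
H = 4                                    # hidden units
n_params = H + H * H + H                 # Wxh, Whh, Why flattened

def unpack(theta):
    """Split a flat parameter vector into the RNN weight matrices."""
    Wxh = theta[:H]
    Whh = theta[H:H + H * H].reshape(H, H)
    Why = theta[H + H * H:]
    return Wxh, Whh, Why

def loss(theta):
    """Mean squared one-step-ahead prediction error of the RNN."""
    Wxh, Whh, Why = unpack(theta)
    h, err = np.zeros(H), 0.0
    for t in range(T):
        h = np.tanh(seq[t] * Wxh + h @ Whh)
        err += (h @ Why - seq[t + 1]) ** 2
    return err / T

def tournament(pop, fit, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    idx = rng.integers(len(pop), size=k)
    return pop[idx[np.argmin(fit[idx])]]

# plain generational GA over the flattened weights:
# tournament selection, blend crossover, Gaussian mutation, one elite
pop = rng.normal(scale=0.5, size=(30, n_params))
for gen in range(200):
    fit = np.array([loss(ind) for ind in pop])
    new_pop = [pop[np.argmin(fit)].copy()]        # elitism
    while len(new_pop) < len(pop):
        a, b = tournament(pop, fit), tournament(pop, fit)
        child = 0.5 * (a + b)                     # blend crossover
        child += 0.1 * rng.normal(size=n_params)  # Gaussian mutation
        new_pop.append(child)
    pop = np.array(new_pop)

print("best MSE:", min(loss(ind) for ind in pop))
```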

Journal: Acta Crystallographica Section A Foundations and Advances 2019

[Chart: number of search results per year]