Search results for: gradient descent algorithm
Number of results: 869,527. Filter results by year:
2 Method: 2.1 Optical Flow; 2.2 Lucas Kanade; 2.3 Gradient Descent; 2.4 Conjugate Gradient Descent; 2.5 Newton's Method . . . ...
A fully adaptive normalized nonlinear complex-valued gradient descent (FANNCGD) learning algorithm for training nonlinear (neural) adaptive finite impulse response (FIR) filters is derived. First, a normalized nonlinear complex-valued gradient descent (NNCGD) algorithm is introduced. For rigour, the remainder of the Taylor series expansion of the instantaneous output error in the derivation of ...
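The key idea behind normalized gradient descent for adaptive filters is to divide the step size by the instantaneous input power, so the update is insensitive to the input signal's scale. A minimal real-valued analogue is the normalized LMS (NLMS) update sketched below; this is an illustration of the general normalization idea, not the complex-valued FANNCGD or NNCGD algorithms from the abstract.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One normalized LMS update for a linear FIR filter.

    The gradient step mu * e * x is divided by the input power x @ x,
    a simple real-valued analogue of normalized gradient descent.
    """
    y = w @ x                            # filter output
    e = d - y                            # instantaneous output error
    w = w + mu * e * x / (x @ x + eps)   # normalized gradient step
    return w, e

# Identify a known 2-tap filter from noisy-free input/output pairs.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -0.5])
w = np.zeros(2)
for _ in range(200):
    x = rng.standard_normal(2)
    w, e = nlms_step(w, x, w_true @ x)
```

Because the step is normalized, the same `mu` works regardless of how large or small the input samples are, which is the practical point of the normalization.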
a back propagation artificial neural network (bpann) is a well-known learning algorithmpredicated on a gradient descent method that minimizes the square error involving the networkoutput and the goal of output values. in this study, 261 gps/leveling and 8869 gravity intensityvalues of iran were selected, then the geoid with three methods “ellipsoidal stokes integral”,“bpann”, and “collocation” ...
We propose an Adaptive Stochastic Conjugate Gradient (ASCG) optimization algorithm for temporal medical image registration. This method combines the advantages of the Conjugate Gradient (CG) method and the Adaptive Stochastic Gradient Descent (ASGD) method. The main idea is that the search direction of ASGD is replaced by stochastic approximations of the conjugate gradient of the cost function. In addi...
Traditional gradient-descent-based learning algorithms, such as back-propagation (BP) and its variant Levenberg-Marquardt (LM), have been widely used for training multilayer feedforward neural networks. Gradient-descent-based algorithms usually converge more slowly than required in training, since such learning algorithms need many iterative learning steps, and...
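The slow convergence these abstracts refer to comes from the iterative nature of gradient descent itself: each step moves a small distance against the gradient, so many steps are needed. A minimal sketch of the basic iteration on a one-dimensional quadratic:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient.

    `grad` is the gradient function of the objective, `lr` the fixed
    learning rate. Many small steps are needed to converge, which is
    the slowness the abstracts above discuss.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
```

With `lr=0.1` the distance to the minimizer shrinks by a constant factor per step, so accuracy improves only linearly in the number of iterations; this is what motivates the accelerated variants (conjugate gradient, LM, adaptive step sizes) in the results above.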
This article is concerned with the multiagent optimization problem. A distributed randomized gradient-free mirror descent (DRGFMD) method is developed by introducing a randomized oracle into a scheme where a non-Euclidean Bregman divergence is used. The classical gradient is generalized without using subgradient information of the objective functions. The proposed algorithms are the first zeroth-order methods, which achieve an appr...
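Gradient-free (zeroth-order) methods replace the gradient with a randomized estimate built only from function evaluations. A standard two-point estimator, which is the kind of oracle such schemes rely on (shown here as a generic illustration, not the specific DRGFMD oracle), looks like this:

```python
import numpy as np

def two_point_grad_estimate(f, x, delta=1e-4, rng=None):
    """Randomized zeroth-order gradient estimate.

    Perturbs x along a random Gaussian direction u and uses the finite
    difference (f(x + delta*u) - f(x)) / delta * u. Since E[u u^T] = I,
    the estimate is unbiased for the gradient up to O(delta) error.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + delta * u) - f(x)) / delta * u

# Averaging many estimates for f(x) = x @ x at x = [1, 2]
# recovers the true gradient [2, 4].
rng = np.random.default_rng(0)
x = np.array([1.0, 2.0])
est = np.mean(
    [two_point_grad_estimate(lambda v: v @ v, x, rng=rng) for _ in range(20000)],
    axis=0,
)
```

A single estimate is noisy, so zeroth-order methods pay for the missing gradient with extra iterations or averaging, which is why their convergence guarantees are typically weaker than first-order ones.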
Stochastic gradient descent based algorithms are typically used as the general optimization tools for most deep learning models. A Restricted Boltzmann Machine (RBM) is a probabilistic generative model that can be stacked to construct deep architectures. For RBMs with Bernoulli inputs, a non-Euclidean algorithm such as stochastic spectral descent (SSD) has been specifically designed to speed up th...
An algorithm and associated strategy for solving polynomial systems within the optimization framework is presented. The algorithm and strategy are named, respectively, the penetrating gradient algorithm and the deepest descent strategy. The most prominent feature of the penetrating gradient algorithm, after which it was named, is its ability to "see and penetrate through" the obstacles in the error spa...
[Chart: number of search results per year]