Search results for: backpropagation

Number of results: 7478

2008
David E. Rumelhart Richard Durbin Richard Golden Yves Chauvin

Since the publication of the PDP volumes in 1986, learning by backpropagation has become the most popular method of training neural networks. The reason for this popularity is the underlying simplicity and relative power of the algorithm. Its power derives from the fact that, unlike its precursors, the perceptron learning rule and the Widrow-Hoff learning rule, it can be employed for training n...
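
A minimal sketch of what the abstract describes, the training procedure itself: one hidden layer fitted by backpropagation on a toy regression task. All names, sizes, and constants here are illustrative choices, not taken from the text.

```python
# Minimal backpropagation sketch (illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))           # toy inputs
y = np.sin(3 * X)                          # toy targets

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.1

for _ in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # backward pass: propagate the error back through each layer
    d_out = 2 * err / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h**2)      # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The hidden layer is what the perceptron and Widrow-Hoff rules cannot train, since those rules have no way to assign error to units without targets of their own.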

1994
A. Tenhagen

In classic backpropagation nets, as introduced by Rumelhart et al. [1], the weights are modified according to the method of steepest descent. The goal of this weight modification is to minimise the error in net outputs for a given training set. Building upon Jacobs' work [2], we point out drawbacks of steepest descent and suggest improvements on it. These yield a backpropagation net, which adjusts ...
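
For context, Jacobs' best-known improvement on steepest descent is the delta-bar-delta scheme: each weight gets its own learning rate, increased when the gradient sign agrees with its recent trend and shrunk when it disagrees. The abstract is truncated, so the paper's exact variant is not shown; this is a sketch of the general idea, with illustrative constants.

```python
# Sketch of Jacobs-style per-weight learning-rate adaptation
# (delta-bar-delta); kappa, phi, theta are illustrative values.
import numpy as np

def delta_bar_delta_step(w, grad, lr, gbar, kappa=0.01, phi=0.5, theta=0.7):
    """One update of weights w given the current gradient.

    lr   -- per-weight learning rates
    gbar -- exponential trace of past gradients
    """
    agree = gbar * grad > 0                  # same sign as the recent trend?
    lr = np.where(agree, lr + kappa,         # increase additively
                  np.where(gbar * grad < 0, lr * phi, lr))  # else shrink
    w = w - lr * grad                        # steepest descent, per-weight rate
    gbar = (1 - theta) * grad + theta * gbar
    return w, lr, gbar
```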

2004
Kunihiko Fukushima Soo-Young Lee Xin Yao

We consider the application of neural associative memories to chemical image recognition. Chemical image recognition is the identification of a substance using chemical sensors' data. The primary advantage of associative memories as compared with feed-forward neural networks is high-speed learning. We have conducted experiments on odour recognition using hetero-associative and modular auto-associative memories....
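
The "high-speed learning" advantage comes from one-shot storage: a linear hetero-associative memory forms its weight matrix in a single pass over the patterns, with no iterative training at all. A minimal sketch with toy bipolar patterns (the sensor data and pattern sizes are made up for illustration):

```python
# Linear hetero-associative memory: one-shot outer-product (Hebbian)
# learning, in contrast to iterative backpropagation.
import numpy as np

def store(patterns_in, patterns_out):
    # One-shot learning: W = sum_k  y_k x_k^T
    return sum(np.outer(y, x) for x, y in zip(patterns_in, patterns_out))

def recall(W, x):
    # Bipolar threshold recall of the associated pattern
    return np.sign(W @ x)

# Toy bipolar (+1/-1) "sensor" inputs and "substance" codes
x1, y1 = np.array([1, -1, 1, -1]), np.array([1, 1, -1])
x2, y2 = np.array([-1, -1, 1, 1]), np.array([-1, 1, 1])
W = store([x1, x2], [y1, y2])
assert np.array_equal(recall(W, x1), y1)   # perfect recall (inputs orthogonal)
```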

2008
Vesna M. Ranković Ilija Ž. Nikolić

Nonlinear system identification via Feedforward Neural Networks (FNN) and Digital Recurrent Networks (DRN) is studied in this paper. The standard backpropagation algorithm is used to train the FNN. A dynamic backpropagation algorithm is employed to adapt the weights and biases of the DRN. The neural networks are trained using the identification error between the model's output and the plant's output. Result...
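
The FNN half of this setup is typically a series-parallel identification scheme: the net maps past plant signals to the next output and is trained on the identification error. A sketch under that assumption, with a made-up plant and illustrative sizes:

```python
# Series-parallel identification sketch (illustrative, not the paper's setup):
# an FNN maps [y(k), u(k)] -> y(k+1), trained by standard backpropagation
# on the identification error e = model output - plant output.
import numpy as np

def plant(y, u):                        # hypothetical nonlinear plant
    return 0.6 * y + 0.2 * np.sin(u) + 0.1 * y * u

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = plant(y[k], u[k])

X = np.stack([y[:-1], u], axis=1)       # regressors [y(k), u(k)]
t = y[1:]                               # target y(k+1)

W1, b1 = rng.normal(0, 0.5, (2, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 0.5, (10, 1)), np.zeros(1)
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    e = (h @ W2 + b2).ravel() - t       # identification error
    g = (2 * e / len(t))[:, None]
    dh = (g @ W2.T) * (1 - h**2)
    W2 -= 0.05 * h.T @ g; b2 -= 0.05 * g.sum(0)
    W1 -= 0.05 * X.T @ dh; b1 -= 0.05 * dh.sum(0)
```

The DRN differs in that its regressors include the model's own past outputs, which is why the dynamic backpropagation algorithm (gradients flowing through time) is needed there.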

Journal: IEEE Transactions on Neural Networks, 2002
Xinghuo Yu Mehmet Önder Efe Okyay Kaynak

A general backpropagation algorithm is proposed for feedforward neural network learning with time-varying inputs. The Lyapunov function approach is used to rigorously analyze the convergence of the weights toward minima of the error function under the algorithm. Sufficient conditions guaranteeing the convergence of the weights for time-varying inputs are derived. It is shown that most commo...
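
A Lyapunov argument of this kind typically runs as follows; this is a generic illustration of the technique, not the paper's exact derivation (the symbols η, e, w are mine):

```latex
% Lyapunov candidate built from the output error e(k), weights w(k):
\[
  V(k) = \tfrac{1}{2}\, e(k)^2, \qquad
  \Delta V(k) = V(k{+}1) - V(k).
\]
% A first-order expansion of the error under the gradient update
% \( \Delta w(k) = -\eta\, e(k)\, \partial e(k)/\partial w \) gives
\[
  \Delta V(k) \approx
  -\eta\, e(k)^2 \left\| \frac{\partial e(k)}{\partial w} \right\|^2
  \left( 1 - \frac{\eta}{2} \left\| \frac{\partial e(k)}{\partial w} \right\|^2 \right),
\]
% so \( \Delta V(k) < 0 \) (and hence convergence of the weights) holds
% whenever \( 0 < \eta < 2 \,/\, \| \partial e(k)/\partial w \|^2 \).
```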

2010
Daniel P. Campbell Daniel A. Cook

This paper describes the use of graphics processors to accelerate the backpropagation method of forming images in Synthetic Aperture Sonar (SAS) systems. SAS systems coherently process multiple pulses to provide a higher level of detail in the resolved image than is otherwise possible with a single pulse. Several models are available to resolve an image from the pulse return data; the backprop...
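
The image-formation method in question is time-domain backprojection: every pixel coherently accumulates each pulse's return sampled at that pixel's round-trip delay. A sketch with simplified geometry (names and parameters are mine, and phase correction for basebanded data is omitted); the GPU version parallelizes by assigning pixels to threads.

```python
# Time-domain backprojection sketch (illustrative, not the paper's GPU code).
import numpy as np

def backproject(returns, platform_xy, t0, dt, grid_x, grid_y, c=1500.0):
    """returns     -- complex echoes, shape (num_pulses, num_samples)
       platform_xy -- sensor position per pulse, shape (num_pulses, 2)
       t0, dt      -- time of first sample and sample spacing
       c           -- sound speed in water (m/s)"""
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for p, (px, py) in enumerate(platform_xy):
        for iy, gy in enumerate(grid_y):
            for ix, gx in enumerate(grid_x):
                r = np.hypot(gx - px, gy - py)     # range to pixel
                tau = 2.0 * r / c                  # round-trip delay
                n = int(round((tau - t0) / dt))    # nearest sample index
                if 0 <= n < returns.shape[1]:
                    image[iy, ix] += returns[p, n] # coherent sum over pulses
    return np.abs(image)
```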

2016
Thomas Reps Naveen Neelakandan

This lecture discusses the relationship between automatic differentiation and backpropagation. Automatic differentiation (AD) is a technique that takes an implementation of a numerical function f (computed using floating-point numbers) and creates an implementation of its derivative f′. We explain several techniques for performing AD. For forward-mode AD, we give an explicit transformation of the program, as ...
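
A standard way to realize forward-mode AD, in the spirit of the program transformation described, is with dual numbers: every value carries its derivative alongside it, so evaluating f also evaluates f′. A minimal sketch (the class and example function are mine):

```python
# Forward-mode AD with dual numbers: (val, dot) pairs propagate
# the derivative through every arithmetic operation.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot      # value and derivative part
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__

def sin(x):                   # chain rule for a primitive
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):                     # f(x) = x*sin(x) + 3x
    return x * sin(x) + 3 * x

d = f(Dual(2.0, 1.0))         # seed dx/dx = 1
print(d.val, d.dot)           # f(2) and f'(2) = sin(2) + 2*cos(2) + 3
```

Backpropagation corresponds instead to reverse-mode AD, which propagates derivatives from outputs back to inputs; that is what makes it cheap for functions with many inputs and one scalar output, such as a training loss.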

2015
Yaroslav Ganin Victor S. Lempitsky

Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on lar...
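
This line of work is widely associated with the gradient-reversal layer: an identity map in the forward pass whose backward pass negates (and scales) the gradient, so the feature extractor learns to confuse a domain classifier. The truncated abstract does not name the mechanism, so treat this as a sketch of that idea; PyTorch is my choice of framework, not the paper's.

```python
# Gradient-reversal layer sketch: identity forward, flipped gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                  # identity on the way forward
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # reversed gradient on the way back

features = torch.randn(8, 32, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)
# a (hypothetical) domain-classifier head would consume reversed_feats here
reversed_feats.sum().backward()
print(features.grad[0, 0])                   # -1.0: the gradient was flipped
```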

Journal: CoRR, 2017
Avi Pfeffer

Probabilistic modeling enables combining domain knowledge with learning from data, thereby supporting learning from fewer training instances than purely data-driven methods. However, learning probabilistic models is difficult and has not achieved the level of performance of methods such as deep neural networks on many tasks. In this paper, we attempt to address this issue by presenting a method...

2003
Petr Krupanský Petr Pivoñka Jiri Dohnal

Control of real processes requires a different approach to neural network learning. The presented modification of the backpropagation learning algorithm changes the meaning of the learning constants. The basis of the modification is a stability condition on the learning dynamics.
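
An illustrative analogue of such a stability-bounded learning constant (not the paper's exact rule) is the normalized step size familiar from NLMS adaptive filters, where the update is stable for any 0 < mu < 2 regardless of input scale:

```python
# Normalized, stability-bounded update sketch (illustrative analogue).
import numpy as np

def stable_update(w, x, target, mu=0.5, eps=1e-8):
    """One adaptive-linear-element step with a stability-bounded rate."""
    e = target - w @ x                 # output error
    eta = mu / (eps + x @ x)           # normalization keeps the dynamics stable
    return w + eta * e * x, e
```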

[Chart: number of search results per year]