Faster Learning for Dynamic Recurrent Backpropagation
Authors
Abstract
The backpropagation learning algorithm for feedforward networks (Rumelhart et al. 1986) has recently been generalized to recurrent networks (Pineda 1989). The algorithm has been further generalized by Pearlmutter (1989) to recurrent networks that produce time-dependent trajectories. The latter method requires much more training time than the feedforward or static recurrent algorithms. Furthermore, the learning can be unstable and the asymptotic accuracy unacceptable for some problems. In this note, we report a modification of the delta weight update rule that significantly improves both the performance and the speed of the original Pearlmutter learning algorithm. Our modified updating rule, a variation on that originally proposed by Jacobs (1988), allows adaptable independent learning rates for individual parameters in the algorithm. The update rule for the i-th weight, w_i, is given by the delta-bar-delta rule:
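The abstract breaks off here and the equation itself is not reproduced. For orientation, the delta-bar-delta rule of Jacobs (1988), which the authors name as their starting point, maintains a separate learning rate ε_i(t) for each weight and adapts it from the sign agreement between the current gradient and an exponentially weighted trace of past gradients; the variant actually used in the paper may differ in detail:

$$
\Delta w_i(t) = -\,\epsilon_i(t)\,\delta_i(t),
\qquad
\delta_i(t) = \frac{\partial E}{\partial w_i}(t),
$$

$$
\Delta \epsilon_i(t) =
\begin{cases}
\kappa & \text{if } \bar{\delta}_i(t-1)\,\delta_i(t) > 0,\\
-\,\phi\,\epsilon_i(t) & \text{if } \bar{\delta}_i(t-1)\,\delta_i(t) < 0,\\
0 & \text{otherwise},
\end{cases}
\qquad
\bar{\delta}_i(t) = (1-\theta)\,\delta_i(t) + \theta\,\bar{\delta}_i(t-1).
$$

A minimal NumPy sketch of this update; the values of kappa, phi, and theta are illustrative defaults, not the paper's settings:

```python
import numpy as np

def delta_bar_delta(w, grad, lr, dbar, kappa=0.01, phi=0.1, theta=0.7):
    """One delta-bar-delta step (Jacobs 1988): every weight keeps its own
    learning rate, grown additively when the current gradient agrees in
    sign with the trace of past gradients and shrunk multiplicatively
    when it disagrees.  kappa, phi, theta are illustrative values."""
    agree = dbar * grad
    lr = np.where(agree > 0, lr + kappa, lr)        # consistent sign: speed up
    lr = np.where(agree < 0, lr * (1.0 - phi), lr)  # oscillation: slow down
    w = w - lr * grad                               # per-weight gradient step
    dbar = (1.0 - theta) * grad + theta * dbar      # exponential gradient trace
    return w, lr, dbar
```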
Similar Resources
Causal BackPropagation Through Time for Locally Recurrent Neural Networks
This paper concerns dynamic neural networks for signal processing: architectural issues are considered, but the paper focuses on learning algorithms that work on-line. Locally recurrent neural networks, namely MLPs with IIR synapses and a generalization, the Local Feedback MultiLayered Network (LF MLN), are compared to more traditional neural networks, i.e., static MLPs with input and/or output buffer...
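To make "IIR synapses" concrete: in these locally recurrent architectures each connection is a small infinite-impulse-response filter with its own feedback taps, rather than a single multiplicative weight. A minimal sketch under that reading, with illustrative tap vectors b and a:

```python
import numpy as np

def iir_synapse(x, b, a):
    """A single IIR synapse: the signal from one unit to another passes
    through a small IIR filter (feedforward taps b, feedback taps a)
    instead of being scaled by one weight.
    y[t] = sum_k b[k]*x[t-k] + sum_k a[k]*y[t-1-k]."""
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        y[t] = sum(b[k] * x[t - k] for k in range(len(b)) if t - k >= 0)
        y[t] += sum(a[k] * y[t - 1 - k] for k in range(len(a)) if t - 1 - k >= 0)
    return y

# Example: two feedforward taps and one feedback tap.
y = iir_synapse(np.ones(10), b=[0.5, 0.25], a=[0.3])
```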
Gradient Calculations for Dynamic Recurrent Neural Networks: A Survey (draft of July 20, 1995, for IEEE Transactions on Neural Networks)
We survey learning algorithms for recurrent neural networks with hidden units and put the various techniques into a common framework. We discuss fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann Machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propaga...
Learning a Simple Recurrent Neural State Space Model to Behave like Chua's Double Scroll
In this short paper we present a simple discrete-time autonomous neural state space model (recurrent network) that behaves like Chua's double scroll. The model is identified using Narendra's dynamic backpropagation procedure. Learning is done in "packets" of increasing time horizon.
Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks
The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward pass in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used i...
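The on-line alternative named in the snippet, Williams and Zipser's forward propagation (real-time recurrent learning, RTRL), avoids the backward pass by carrying weight sensitivities forward in time, at the cost of heavy per-step computation. A minimal NumPy sketch for a toy vanilla RNN follows; it illustrates the forward-propagation style being contrasted, not the Green's function method itself, and the network and loss are assumptions of ours:

```python
import numpy as np

def rtrl_step(W, h_prev, x, target, P, lr=0.01):
    """One on-line RTRL update for a vanilla RNN h_t = tanh(W h_prev + x).
    P[k, i, j] carries the sensitivity dh_prev[k] / dW[i, j] forward in
    time, so a gradient step is available at every time step."""
    n = h_prev.size
    h = np.tanh(W @ h_prev + x)
    d = 1.0 - h ** 2                                  # tanh'(pre-activation)
    # dh[k]/dW[i,j] = d[k] * (delta_{k,i} * h_prev[j] + sum_m W[k,m] P[m,i,j])
    P_new = np.einsum('km,mij->kij', W, P)
    P_new[np.arange(n), np.arange(n), :] += h_prev    # the delta_{k,i} term
    P_new *= d[:, None, None]
    # Gradient of the instantaneous loss 0.5 * ||h - target||^2
    grad = np.einsum('k,kij->ij', h - target, P_new)
    return W - lr * grad, h, P_new

# Usage: zero initial state and sensitivities, then update on-line each step.
n = 4
W, h, P = 0.1 * np.random.randn(n, n), np.zeros(n), np.zeros((n, n, n))
for _ in range(100):
    W, h, P = rtrl_step(W, h, np.random.randn(n), np.zeros(n), P)
```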
Learning Longer-term Dependencies in RNNs with Auxiliary Losses
We present a simple method to improve learning of long-term dependencies in recurrent neural networks (RNNs) by introducing unsupervised auxiliary losses. These auxiliary losses force RNNs either to remember the distant past or to predict the future, enabling truncated backpropagation through time (BPTT) to work on very long sequences. We experimented on sequences up to 16,000 tokens long and report faster t...
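A hedged PyTorch sketch of the idea: attach an unsupervised next-step-prediction head to an RNN classifier and add its loss to the main objective. The paper's actual scheme reconstructs randomly sampled past segments; the class and parameter names here (AuxRNN, aux_weight) are illustrative, not the authors':

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxRNN(nn.Module):
    """Illustrative sketch: an LSTM classifier with an auxiliary head
    that predicts the next input from the current hidden state."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_classes)  # main task head
        self.aux = nn.Linear(hidden, in_dim)     # unsupervised auxiliary head

def total_loss(model, x, y, aux_weight=0.5):
    out, _ = model.rnn(x)                             # (B, T, H)
    main = F.cross_entropy(model.cls(out[:, -1]), y)  # classify from last state
    # Auxiliary loss: predict x[t+1] from h[t]; pushes the hidden state to
    # carry information forward, so truncated BPTT still picks up
    # longer-range structure.
    aux = F.mse_loss(model.aux(out[:, :-1]), x[:, 1:])
    return main + aux_weight * aux
```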
Journal: Neural Computation
Volume 2, Issue -
Pages: -
Publication date: 1990