Learning Input and Recurrent Weight Matrices in Echo State Networks
Authors
Abstract
The traditional echo state network (ESN) is a special type of temporally deep model, the recurrent neural network (RNN), in which the recurrent and input matrices are carefully designed and then fixed. The ESN also adopts linear output (or readout) units to simplify learning of the output matrix, the only matrix trained in the RNN. In this paper, we devise a technique that exploits the linearity of the output units in the ESN to learn the input and recurrent matrices as well, something not done in earlier ESNs because of the well-known difficulty of such learning. Compared with backpropagation through time (BPTT) for learning general RNNs, our proposed technique uses the linearity of the output units to impose constraints among the various matrices in the RNN, enabling the gradients used as the learning signal to be computed in analytical form rather than by recursion as in BPTT. Experimental results on phone state classification show that learning either or both the input and recurrent matrices in the ESN is superior to the traditional ESN that leaves them fixed, especially when longer time steps are used in analytically computing the gradients.
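For context, the sketch below illustrates the traditional ESN baseline that the abstract contrasts against: a randomly generated input matrix W_in and recurrent matrix W_rec that are kept fixed (with W_rec scaled to a spectral radius below one), a nonlinear reservoir, and a linear readout W_out obtained in closed form by ridge regression. It does not reproduce the paper's proposed analytical-gradient learning of W_in and W_rec; all dimensions, the synthetic data, and the regularization constant are assumptions made for illustration only.

```python
import numpy as np

# Minimal traditional-ESN sketch (illustrative; not the paper's learning algorithm).
# Dimensions and names (n_in, n_res, ridge_lambda, ...) are assumptions for this example.
rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 10, 200, 5, 1000

# Fixed random input and recurrent matrices (the matrices the paper proposes to learn).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_rec = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))  # scale spectral radius below 1 (echo state heuristic)

# Synthetic data standing in for the phone-state classification features/targets.
U = rng.standard_normal((T, n_in))   # inputs u(t)
D = rng.standard_normal((T, n_out))  # desired outputs d(t)

# Run the reservoir: x(t) = tanh(W_rec x(t-1) + W_in u(t)).
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_rec @ x + W_in @ U[t])
    X[t] = x

# Linear readout y(t) = W_out x(t), learned in closed form by ridge regression.
ridge_lambda = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge_lambda * np.eye(n_res), X.T @ D).T

predictions = X @ W_out.T  # shape (T, n_out)
print("training MSE:", np.mean((predictions - D) ** 2))
```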
Similar resources
Echo State networks and Neural network Ensembles to predict Sunspots activity
Echo state networks (ESNs) and ensembles of neural networks are developed for the prediction of the monthly sunspot series. Numerical evaluation on this benchmark data set shows that the feedback ESN models outperform feedforward MLPs. Furthermore, it is shown that median fusion leads to robust predictors and can even improve the prediction accuracy of the best individual pre...
Input-Anticipating Critical Reservoirs Show Power Law Forgetting of Unexpected Input Events
Reservoir computing usually shows exponential memory decay. This letter investigates the circumstances under which echo state networks can show power-law forgetting, meaning that traces of earlier events can be found in the reservoir for very long time spans. Such a setting requires critical connectivity, exactly at the limit of what is permissible according to the echo state condition. However, ...
Design Strategies for Weight Matrices of Echo State Networks
This article develops approaches to generating dynamical reservoirs of echo state networks with desired properties while reducing the amount of randomness. It is possible to create weight matrices with a predefined singular value spectrum (a generic construction of this kind is sketched below, after this list of similar resources); the procedure guarantees stability (the echo state property). We prove that the impact of noise on the training process is minimized. The resulting reservoir types are...
Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks
Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless ...
Multiobjective Optimization of Echo State Networks for multiple motor pattern learning
Echo State Networks are a special class of recurrent neural networks that are well suited for attractor-based learning of motor patterns. Using structural multiobjective optimization, the trade-off between network size and accuracy can be identified. This makes it possible to choose a feasible model capacity for a follow-up full-weight optimization. It is shown to produce small and efficient networks tha...
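As referenced in the "Design Strategies for Weight Matrices of Echo State Networks" entry above, a reservoir matrix with a predefined singular value spectrum can be built from two random orthogonal factors and a chosen diagonal of singular values; keeping the largest singular value below one satisfies a well-known sufficient condition for the echo state property. The construction below is a generic sketch of that idea, not the cited article's specific design strategy, and its sizes and spectrum are illustrative assumptions.

```python
import numpy as np

# Generic sketch: build W_rec = U @ diag(s) @ V.T with a prescribed singular value
# spectrum s. This is not the cited article's specific procedure; the size and the
# chosen spectrum are illustrative assumptions.
rng = np.random.default_rng(1)
n_res = 100
s = np.linspace(0.95, 0.05, n_res)  # desired singular values, all below 1

# Random orthogonal factors from QR decompositions of Gaussian matrices.
U, _ = np.linalg.qr(rng.standard_normal((n_res, n_res)))
V, _ = np.linalg.qr(rng.standard_normal((n_res, n_res)))
W_rec = U @ np.diag(s) @ V.T

# A largest singular value below 1 is a sufficient condition for the echo state
# property with tanh (1-Lipschitz) reservoir units.
print("largest singular value:", np.linalg.svd(W_rec, compute_uv=False)[0])
```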
Journal title: CoRR
Volume: abs/1311.2987
Issue: -
Pages: -
Publication date: 2013