Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time
Authors
Abstract
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance that is competitive with vanilla recurrent neural networks. However, these algorithms are limited to small networks of simple neurons and to modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models and are incompatible with online learning. Here we show how the recently developed Forward-Propagation Through Time (FPTT) learning, combined with novel liquid time-constant spiking neurons, resolves these limitations. Applying FPTT to networks of such neurons, we demonstrate online learning of exceedingly long sequences while outperforming current online methods and approaching or outperforming offline methods on temporal classification tasks. The efficiency and robustness of FPTT enable us to directly train a deep and performant spiking neural network for joint object localization and recognition, demonstrating the ability to train large-scale dynamic and complex network architectures. Memory-efficient training without compromising accuracy is an open challenge in neuromorphic computing. Yin and colleagues train spiking neural networks consisting of so-called liquid time-constant neurons using an algorithm called forward propagation through time, which allows state-of-the-art performance at reduced computational cost compared with existing approaches.
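The FPTT learning named in the abstract trains a recurrent network online: instead of backpropagating through the full sequence, each timestep's loss is augmented with a proximal term that pulls the parameters toward a slowly updated running average. A minimal NumPy sketch of that update, assuming the standard FPTT formulation (the names `alpha`, `theta_bar` and the toy quadratic loss are illustrative, not from the paper's code):

```python
import numpy as np

def fptt_step(theta, theta_bar, grad_fn, alpha=0.5, lr=0.1):
    """One FPTT-style online update of the parameter vector `theta`.

    grad_fn(theta) returns the gradient of the instantaneous loss l_t.
    The dynamic regularizer (alpha/2) * ||theta - theta_bar||^2 keeps the
    per-timestep updates stable without storing the activation history
    that backpropagation through time would require.
    """
    # Gradient of the regularized loss l_t(theta) + (alpha/2)||theta - theta_bar||^2:
    g = grad_fn(theta) + alpha * (theta - theta_bar)
    theta_new = theta - lr * g
    # Update the running-average parameter state:
    theta_bar_new = 0.5 * (theta_bar + theta_new) - (0.5 / alpha) * grad_fn(theta_new)
    return theta_new, theta_bar_new

# Toy usage: track a fixed quadratic target online, one "timestep" per update.
theta = np.zeros(2)
theta_bar = np.zeros(2)
target = np.array([1.0, -2.0])
for _ in range(200):
    theta, theta_bar = fptt_step(theta, theta_bar, lambda th: th - target)
print(np.round(theta, 2))  # converges to the target
```

In the paper's setting, `grad_fn` would be the gradient of the spiking network's loss at the current timestep; the sketch only illustrates the proximal update rule itself.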
Similar resources
Off-Chip Training of Analog Hardware Feed-Forward Neural Networks through Hyper-Floating Resilient Propagation
Any attempt to implement abstract functional relationships on silicon is likely to considerably increase the complexity of analog circuits for Feed-Forward Networks (FFNs). We developed an alternative model that can make direct use of the native computational properties of elementary electronic devices. A practical framework is described to train such analog FFNs "off-chip". This is esp...
Causal Back Propagation through Time for Locally Recurrent Neural Networks
This paper concerns dynamic neural networks for signal processing: architectural issues are considered but the paper focuses on learning algorithms that work on-line. Locally recurrent neural networks, namely MLP with IIR synapses and generalization of Local Feedback MultiLayered Networks (LF MLN), are compared to more traditional neural networks, i.e. static MLP with input and/or output buffer...
Uncertainty propagation through deep neural networks
In order to improve the ASR performance in noisy environments, distorted speech is typically pre-processed by a speech enhancement algorithm, which usually results in a speech estimate containing residual noise and distortion. We may also have some measures of uncertainty or variance of the estimate. Uncertainty decoding is a framework that utilizes this knowledge of uncertainty in the input fe...
Variable Binding through Assemblies in Spiking Neural Networks
We propose a model for the binding of variables to concrete fillers in the human brain. The model is based on recent experimental data about corresponding neural processes in humans. First, electrode recordings from the human brain suggest that concepts are represented in the medial temporal lobe (MTL) through sparse sets of neurons (assemblies). Second, fMRI recordings from the human brain sug...
Adversarial Training for Probabilistic Spiking Neural Networks
Classifiers trained using conventional empirical risk minimization or maximum likelihood methods are known to suffer dramatic performance degradations when tested over examples adversarially selected based on knowledge of the classifier’s decision rule. Due to the prominence of Artificial Neural Networks (ANNs) as classifiers, their sensitivity to adversarial examples, as well as robust trainin...
Journal
Journal title: Nature Machine Intelligence
Year: 2023
ISSN: 2522-5839
DOI: https://doi.org/10.1038/s42256-023-00650-4