Learning in large-scale spiking neural networks
Authors
Abstract
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback.

In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks.

In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task.

In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes.
The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
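For readers unfamiliar with BCM, the classical rate-based form of the rule can be sketched as follows. Note that this is only the standard textbook formulation, not the spiking adaptation developed in the thesis; the learning rate, threshold time constant, and function names below are illustrative assumptions.

```python
import numpy as np

# Minimal rate-based sketch of the BCM learning rule (standard formulation;
# the thesis adapts BCM to spiking neurons, which is not reproduced here).
# The weight change is dw = lr * x * y * (y - theta), where the modification
# threshold theta tracks a running average of y**2, so the same synapse can
# potentiate or depress depending on recent postsynaptic activity.

def bcm_update(w, x, theta, lr=0.01, tau=100.0):
    """One BCM step for a single linear neuron with inputs x and weights w.

    lr and tau are illustrative parameter choices, not values from the thesis.
    """
    y = float(np.dot(w, x))               # postsynaptic activity
    w = w + lr * x * y * (y - theta)      # BCM weight change
    theta = theta + (y**2 - theta) / tau  # sliding threshold, approximates <y^2>
    return w, theta

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=4)  # small random initial weights
theta = 0.0
for _ in range(1000):
    x = rng.random(4)             # random positive input pattern
    w, theta = bcm_update(w, x, theta)
```

The sliding threshold is what lets BCM reproduce both potentiation and depression without storing spike times, which is the property the thesis exploits when recreating STDP-like effects.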
Similar resources
Biologically inspired neural networks for the control of embodied agents
This paper reviews models of neural networks suitable for the control of artificial intelligent agents interacting continuously with an environment. We first define the characteristics needed by those neural networks. We review several classes of neural models and compare them in respect of their suitability for embodied agent control. Among the classes of neural network models amenable to larg...
Improving the Izhikevich Model Based on Rat Basolateral Amygdala and Hippocampus Neurons, and Recognizing Their Possible Firing Patterns
Introduction: Identifying the potential firing patterns following different brain regions under normal and abnormal conditions increases our understanding of events at the level of neural interactions in the brain. Furthermore, it is important to be capable of modeling the potential neural activities to build precise artificial neural networks. The Izhikevich model is one of the simplest biolog...
Challenges for large-scale implementations of spiking neural networks on FPGAs
The last 50 years has witnessed considerable research in the area of neural networks resulting in a range of architectures, learning algorithms and demonstrative applications. A more recent research trend has focused on the biological plausibility of such networks as a closer abstraction to real neurons may offer improved performance in an adaptable, real-time environment. This poses considerab...
Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and constru...
Publication date: 2011