The Pennsylvania State University
The Graduate School
Department of Computer Science and Engineering

EFFICIENT AND SCALABLE BIOLOGICALLY PLAUSIBLE SPIKING NEURAL NETWORKS WITH LEARNING APPLIED TO VISION

Authors

  • Ankur Gupta
  • Lyle N. Long
  • Soundar R. T. Kumara
  • Robert T. Collins
  • William E. Higgins
  • John C. Collins
  • Raj Acharya
Abstract

concept of her. Collins and Jin [109] show that a 'grandmother cell' type of representation can be information-theoretically efficient provided it is accompanied by cells using a distributed code. Maass [110] shows that WTA is quite powerful compared to the threshold and sigmoidal gates often used in traditional neural networks: any Boolean function can be computed by a single k-WTA unit [110]. This is very interesting, since circuits of threshold gates need at least two layers of perceptrons to compute complicated functions. Maass also showed that any continuous function can be approximated by a single soft-WTA unit (a soft winner-take-all operation outputs values that depend on the rank of the corresponding input in the linear order of all inputs). Another advantage is that approximate WTA computation can be carried out very fast, in linear-size analog VLSI chips [111]. Thus, complex feed-forward multi-layered perceptron circuits can be replaced by a single competitive WTA stage, leading to low-power analog VLSI chips [110].

There have been many implementations of winner-take-all (WTA) computation in recurrent networks in the literature [112, 113], as well as many analog VLSI implementations of these circuits [113, 114]. The WTA model implemented here is influenced by the recurrent-network WTA implementation of Oster and Liu [113], in which the neuron that receives spikes with the shortest inter-spike interval is the winner. It is not clear in their implementation, however, how a new neuron can learn a new category starting from random weights. A modified version of WTA with Hebbian learning is therefore implemented here to demonstrate how different neurons can learn different categories. WTA is applied to both learning layers during training and is switched off during testing. The WTA is implemented as follows:

1) At every time step, find the post-synaptic neuron with the smallest spike-time difference t_post,1 - t_post,2 between its last two spikes. (These two most recent post-synaptic spike times are readily available.) This neuron is declared the winner.

2) The winner inhibits the other neurons from firing by sending them an inhibitory pulse. If the winner has not learned any feature yet, it learns the new feature by the Hebbian learning method described above.

A neuron remains the winner until another neuron attains a shorter spike-time interval or a new image is presented. This learning approach is followed in every learning layer except the last (uppermost) one, where the winner is declared according to the supplied category/label information for the input image rather than the spike-time difference. The overall approach is thus semi-supervised, with unsupervised learning in the lower learning layer and supervised learning in the last learning layer. We assume that all membrane potentials are discharged at the onset of a stimulus; this can be achieved, for example, by a decay in the membrane potential. A minimal code sketch of this winner-selection loop is given below.
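To make the loop concrete, here is a minimal Python sketch of the procedure just described. It is illustrative only: the class and all identifiers (WTALayer, record_spike, step, and so on) are ours rather than the thesis code, the simulation is reduced to the WTA bookkeeping, and a simple weight-averaging update stands in for the spike-timing-based Hebbian rule.

    import numpy as np

    class WTALayer:
        """Illustrative time-stepped WTA layer (not the thesis implementation)."""

        def __init__(self, n_pre, n_post, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.uniform(0.0, 1.0, size=(n_post, n_pre))  # random start
            # Sentinel spike times: a neuron that has not fired twice yet gets
            # a huge inter-spike interval, so it cannot be declared the winner.
            self.spikes = np.tile(np.array([-1.0e9, -2.0e9]), (n_post, 1))
            self.learned = np.zeros(n_post, dtype=bool)  # feature claimed yet?
            self.lr = lr

        def record_spike(self, j, t):
            """Shift in post-neuron j's newest spike time (newest in column 0)."""
            self.spikes[j, 1] = self.spikes[j, 0]
            self.spikes[j, 0] = t

        def winner(self):
            """The neuron with the smallest interval t_post,1 - t_post,2 wins."""
            isi = self.spikes[:, 0] - self.spikes[:, 1]
            return int(np.argmin(isi))

        def step(self, pre_activity, training=True, label=None):
            """One WTA step; `label` overrides the winner in the top layer."""
            if not training:
                return None  # WTA is switched off while testing
            j = label if label is not None else self.winner()
            # In the full simulation the winner now sends an inhibitory pulse
            # to every other post-neuron, preventing them from firing.
            if not self.learned[j]:
                # Stand-in Hebbian move toward the active pre-synaptic pattern.
                self.w[j] += self.lr * (pre_activity - self.w[j])
                self.learned[j] = True  # simplification: one update claims a feature
            return j

    layer = WTALayer(n_pre=64, n_post=10)
    layer.record_spike(3, t=11.0)
    layer.record_spike(3, t=11.5)        # neuron 3 now has the shortest interval
    assert layer.step(np.ones(64)) == 3  # so it wins and learns the pattern

Passing a label index reproduces the supervised variant used in the last learning layer, where the winner is dictated by the category information rather than by the inter-spike interval.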
There have been several implementations of sparse coding schemes as well. Földiák [105] shows how a layer of neurons can learn a sparse code by using Hebbian learning on the excitatory connections between the input and output units and anti-Hebbian learning on the connections among the output units. Olshausen and Field [115] showed that minimizing an objective function rewarding high sparseness and low reconstruction error on a set of natural images yields a set of basis functions similar to the Gabor-like receptive fields of simple cells in primary visual cortex. One interesting study is by Einhauser et al. [116], who developed a neural network model that could develop receptive-field properties similar to those of the simple and complex cells found in visual cortex. The network learned from natural stimuli obtained by mounting a camera on a cat's head to approximate the input to a cat's visual system. They did not, however, use a spiking neural network.

4.3 Neuronal and Synaptic Genesis

Neurogenesis and synaptogenesis (the birth of neurons and synapses) have been shown to occur in the brain in certain settings [117, 118], and are known to affect some learning and memory tasks [119-121]. Many studies have indicated that the brain is much more plastic than previously thought [122]. For example, brain scans of people who lose their limbs in accidents show that the parts of the brain maps corresponding to the lost limb are taken over by the surrounding brain maps. Such plasticity and change can be made possible by the birth and death of neurons and connections. For efficient simulations it is therefore important to be able to add neurons and synapses, and to remove them when necessary; we found that our network benefits from modeling these processes.

For illustration, we trained a network with the architecture shown in the right plot of Figure 3-9 using our learning approach. The goal was to recognize handwritten digits from 0 to 9; more details about the network structure and training problem can be found in Section 6.4.1. Our focus here is on the first learning layer of the architecture, which was trained in an unsupervised way. Initially the synaptic weights in this layer were set to random values. Figure 4-2 shows 2D arrays of the synapse values in this layer after training, plotted as 8x8 projections onto the previous layer. The synaptic weights are plotted as intensities, with white representing the highest synaptic strength and black the lowest. These arrays represent the features that were learned. Note that some of them remained unchanged (kept their random starting values) during training and could just as well have been removed from the simulation for efficiency. Better yet, we could have started with fewer synapses and added them as needed. This would ensure that unnecessary connections are not present and would save computer memory; a sketch of such pruning follows the figure caption below.

Figure 4-2: Images of arrays of final synapse weights, plotted as 8x8 projections onto the previous layer for one learning layer of the network after training. White represents the highest synaptic strength and black the lowest.
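The pruning idea can be sketched in a few lines. The fragment below is a hypothetical illustration, not the thesis implementation: it assumes the initial random weights were kept around for comparison, the function name and tolerance are ours, and SciPy is assumed for the sparse container.

    import numpy as np
    from scipy import sparse

    def prune_unlearned_synapses(w_final, w_initial, tol=1e-3):
        """Remove synapses whose weights never moved from their random
        starting values during training (illustrative criterion)."""
        unchanged = np.abs(w_final - w_initial) < tol
        pruned = np.where(unchanged, 0.0, w_final)
        # A sparse matrix stores only the surviving synapses, saving the
        # memory that the dead connections would otherwise occupy.
        return sparse.csr_matrix(pruned), int(unchanged.sum())

    # Example: an 8x8 projection in which only the first four rows learned.
    rng = np.random.default_rng(1)
    w0 = rng.uniform(size=(8, 8))        # stored initial random weights
    w1 = w0.copy()
    w1[:4] = rng.uniform(size=(4, 8))    # pretend training changed rows 0-3
    w_sparse, n_removed = prune_unlearned_synapses(w1, w0)
    print(f"removed {n_removed} of {w1.size} never-trained synapses")

Synaptogenesis, the complementary operation, would instead start from a sparse weight matrix and insert entries on demand, which is the "start with fewer synapses and add them as needed" strategy suggested above.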
