An Analog VLSI Implementation of the Wake-Sleep Learning Algorithm Using Bi-Stable Synaptic Weights

Authors

  • Guy Lipworth
  • Kyle McMillan
  • Timothy Horiuchi
  • Pamela Abshire
  • Timir Datta
  • Anshu Sarje
Abstract

Drawing their inspiration from biological systems, typical supervised neural networks learn to classify features within a set of inputs through repetition. Here, we focus instead on using an auto-encoder network to memorize each item in a set of inputs rather than to classify them. We have simulated this network in MATLAB using the “Wake-Sleep” learning algorithm proposed by Hinton et al. [1] and demonstrated that the algorithm can be used successfully with binary synaptic weights trained in a bistable manner. Working from these simulations, we have designed and simulated a low-power analog VLSI synapse circuit with analog but bistable weights that can implement the Wake-Sleep algorithm.
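To make the training scheme concrete, the following is a minimal sketch, not the authors' MATLAB code, of a one-hidden-layer Helmholtz-machine auto-encoder trained with the Wake-Sleep rule, in which each synapse keeps an internal analog state that is read out as a bistable (binary) weight. The layer sizes, learning rate, ±1 weight levels, and omission of bias terms are illustrative assumptions.

```python
# Minimal sketch, assuming a one-hidden-layer Helmholtz machine with bistable
# synapses: each weight has an analog internal state (R, G) that is updated by
# the Wake-Sleep delta rule but is read out as one of two stable levels.
# Sizes, learning rate, and the +/-1 weight levels are assumptions, not the
# paper's circuit parameters; biases are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

def binarize(w_analog):
    """Read out the bistable weight: one of two stable levels (+1 or -1 here)."""
    return np.where(w_analog > 0.0, 1.0, -1.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

n_vis, n_hid = 16, 8                       # illustrative layer sizes
R = rng.normal(0, 0.1, (n_hid, n_vis))     # analog state of recognition weights
G = rng.normal(0, 0.1, (n_vis, n_hid))     # analog state of generative weights
lr = 0.05

def wake_sleep_step(v_data):
    global R, G
    # Wake phase: recognize the data, then train the generative weights.
    h = sample_bernoulli(sigmoid(binarize(R) @ v_data))
    v_recon_p = sigmoid(binarize(G) @ h)
    G += lr * np.outer(v_data - v_recon_p, h)          # delta rule on analog state
    # Sleep phase: dream from the generative model, then train recognition.
    h_dream = sample_bernoulli(np.full(n_hid, 0.5))
    v_dream = sample_bernoulli(sigmoid(binarize(G) @ h_dream))
    h_recog_p = sigmoid(binarize(R) @ v_dream)
    R += lr * np.outer(h_dream - h_recog_p, v_dream)

# Usage: repeatedly present a small set of binary patterns to be memorized.
patterns = sample_bernoulli(np.full((4, n_vis), 0.5))
for _ in range(2000):
    wake_sleep_step(patterns[rng.integers(len(patterns))])
```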

Similar articles

Spiking neuron network Helmholtz machine

An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in...

Analog implementation of a Kohonen map with on-chip learning

Kohonen maps are self-organizing neural networks that classify and quantify n-dimensional data into a one- or two-dimensional array of neurons. Most applications of Kohonen maps use simulations on conventional computers, eventually coupled to hardware accelerators or dedicated neural computers. The small number of different operations involved in the combined learning and classification process...
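To make the Kohonen-map operation described above concrete, here is a minimal sketch of the standard self-organizing-map update, not the paper's on-chip circuit; the one-dimensional map, learning rate, and Gaussian neighborhood width are illustrative assumptions.

```python
# Minimal sketch, assuming a 1-D Kohonen map quantizing n-dimensional inputs:
# the combined classification + learning step finds the best-matching neuron
# and pulls it (and its neighbors on the map) toward the input vector.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_dim = 20, 3                   # illustrative map and input sizes
W = rng.random((n_neurons, n_dim))         # one weight vector per neuron

def som_step(x, lr=0.1, sigma=2.0):
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))     # best-matching unit
    dist = np.abs(np.arange(n_neurons) - bmu)          # distance along the map
    neighborhood = np.exp(-dist**2 / (2 * sigma**2))   # Gaussian neighborhood
    W[:] += lr * neighborhood[:, None] * (x - W)       # move neurons toward x

for _ in range(5000):
    som_step(rng.random(n_dim))
```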

A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks

Typical methods for gradient descent in neural network learning involve calculation of derivatives based on a detailed knowledge of the network model. This requires extensive, time-consuming calculations for each pattern presentation, as well as a level of precision that makes it difficult to implement in VLSI. We present here a perturbation technique that measures, not calculates, the gradient. Since the te...
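A minimal sketch of the measured-gradient idea described above (not the paper's exact method): all weights are perturbed at once by a small random amount, and the resulting change in error, which on a chip would be a physical measurement, yields a per-weight gradient estimate. The toy error function, perturbation size, and step size are assumptions.

```python
# Minimal sketch of parallel perturbation: estimate the gradient from a
# measured change in error under a simultaneous random +/- perturbation of
# all weights, instead of calculating derivatives from the network model.
import numpy as np

rng = np.random.default_rng(2)

def error(w):
    """Stand-in for the measured network error on one pattern presentation."""
    target = np.array([0.3, -0.7, 0.5])
    return float(np.sum((w - target) ** 2))

w = rng.normal(0.0, 1.0, 3)
delta, lr = 1e-3, 0.05

for _ in range(500):
    perturb = delta * rng.choice([-1.0, 1.0], size=w.shape)  # random +/- delta
    grad_est = (error(w + perturb) - error(w)) / perturb     # per-weight estimate
    w -= lr * grad_est                                       # descend the estimate

print(w)   # drifts toward the target vector as the noisy estimates average out
```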

A Spike Based Learning Neuron in Analog VLSI

Many popular learning rules are formulated in terms of continuous, analog inputs and outputs. Biological systems, however, use action potentials, which are digital-amplitude events that encode analog information in the inter-event interval. Action-potential representations are now being used to advantage in neuromorphic VLSI systems as well. We report on a simple learning rule, based on the Ric...

Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...
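The weight-perturbation scheme summarized above can be sketched as follows, here as a tiny software model rather than the paper's analog hardware: each weight is perturbed in turn, the error is re-measured, and the weight is then moved against the measured change, so no back-propagation circuitry is needed. The two-layer network, XOR-style task, and constants are illustrative assumptions.

```python
# Minimal sketch of single-weight perturbation: approximate each weight's
# gradient by perturbing that weight alone, re-measuring the error, and
# updating in proportion to the measured change (no back-propagation).
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, x):
    return sigmoid(W2 @ sigmoid(W1 @ x))

def mse(W1, W2, x, t):
    return float(np.mean((forward(W1, W2, x) - t) ** 2))

n_in, n_hid, n_out = 2, 3, 1
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
delta, lr = 1e-4, 0.5

# XOR-style toy data, only to exercise the update rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(2000):
    for x, t in zip(X, T):
        base = mse(W1, W2, x, t)
        grads = []
        for W in (W1, W2):
            g = np.zeros_like(W)
            for idx in np.ndindex(W.shape):
                W[idx] += delta                          # perturb one weight
                g[idx] = (mse(W1, W2, x, t) - base) / delta
                W[idx] -= delta                          # restore it
            grads.append(g)
        for W, g in zip((W1, W2), grads):
            W -= lr * g                                  # apply all measured updates
```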

Journal:

Volume:   Issue:

Pages: -

Publication date: 2008