A high capacity incremental and local learning algorithm for attractor neural networks

Author

  • Amos Storkey
Abstract

Attractor neural networks such as the Hopfield network can be trained by a number of algorithms. The simplest of these is the Hebb rule, which is strictly local (the weight of a synapse depends only on the activations of the two neurons it connects) and incremental (a new memory can be added knowing only the old weight matrix, not the patterns previously stored). Both of these properties are highly desirable. However, it has been shown that the Hebbian network has a low absolute capacity of n/(2 ln n), where n is the total number of neurons. This capacity can be increased to n by use of the pseudo-inverse rule, but that rule is neither local nor incremental. At best, the pseudo-inverse weight matrix can be generated by local (though not strictly local) and non-incremental limit processes. The question addressed by this paper is: can the capacity of the Hebbian rule be increased without losing locality or incrementality? Here a new algorithm is proposed. The algorithm is local but not strictly local: each weight depends on both the activations and the local fields of the two neurons it connects. It is immediate: learning is one-shot rather than a limit process. It is also incremental. In addition, it has an absolute capacity significantly higher than that of the Hebbian method: n/√(2 ln n). In this report the new learning method is introduced, the relationship between the new rule, the Hebbian rule, and the pseudo-inverse is given, and the absolute capacity of the learning algorithm is calculated rigorously using a simple probabilistic approach. Complicated large-deviation calculations are avoided by judicious use of an exponential upper bound. Simulations are provided to show that this calculation does indeed provide a good measure of the capacity for finite network sizes.
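The abstract describes the rule's properties without stating the update itself. As a minimal sketch, assuming the form in which this rule (now usually called the Storkey rule) is commonly quoted in the literature, the NumPy code below contrasts the strictly local Hebbian increment with the new increment, which also uses the local fields h_ij = Σ_{k≠i,j} w_ik ξ_k. The matrix formulation and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def hebb_update(W, xi):
    """One-shot Hebbian increment for a new pattern xi in {-1, +1}^n.
    Strictly local: the change to w_ij uses only the activations xi_i
    and xi_j. Self-connections are kept at zero."""
    n = len(xi)
    dW = np.outer(xi, xi) / n
    np.fill_diagonal(dW, 0.0)
    return W + dW

def storkey_update(W, xi):
    """One-shot increment in the commonly quoted form of the Storkey rule:
    dw_ij = (xi_i*xi_j - xi_i*h_ji - h_ij*xi_j) / n, where
    h_ij = sum over k != i, j of w_ik * xi_k is a local field.
    Local but not strictly local, and incremental: only the old W and
    the new pattern are needed."""
    n = len(xi)
    h_full = W @ xi                       # full local field at each neuron
    # h[i, j] = h_full[i] - w_ii*xi_i - w_ij*xi_j (exclude neurons i and j)
    h = h_full[:, None] - np.diag(W)[:, None] * xi[:, None] - W * xi[None, :]
    dW = (np.outer(xi, xi) - xi[:, None] * h.T - h * xi[None, :]) / n
    np.fill_diagonal(dW, 0.0)
    return W + dW

# Usage: store random patterns incrementally, then check that a stored
# pattern is (close to) a fixed point of the retrieval dynamics.
rng = np.random.default_rng(0)
n, p = 100, 10
patterns = rng.choice([-1, 1], size=(p, n))
W = np.zeros((n, n))
for xi in patterns:
    W = storkey_update(W, xi)
recalled = np.where(W @ patterns[0] >= 0, 1, -1)
print("overlap with stored pattern:", np.mean(recalled == patterns[0]))
```

Note that both updates read only the previous weight matrix and the new pattern, which is exactly the incrementality property the abstract emphasises.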


Similar resources

Palimpsest Memories: a New High-capacity Forgetful Learning Rule for Hopfield Networks

Preprint abstract: Palimpsest or forgetful learning rules for attractor neural networks do not suffer from catastrophic forgetting. Instead they selectively forget older memories in order to store new patterns. Standard palimpsest learning algorithms have a capacity of up to 0.05n, where n is the size of the network. Here a new learning rule is introduced. This rule is local and incremental. It i...


Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection

In this study, two novel learning algorithms have been applied to the Radial Basis Function Neural Network (RBFNN) to approximate highly nonlinear functions. The Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly minimize the error functions. The main idea concerns the various strategies for optimizing the procedure of Gradient ...


A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network

Abstract: Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). Deficiencies of FWNNs in previous studies include the absence of an appropriate structure, convergence to local optima, and slow learning. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address these learning deficiencies. Differential Evolution...


A Hybrid Framework for Building an Efficient Incremental Intrusion Detection System

In this paper, a boosting-based incremental hybrid intrusion detection system is introduced. This system combines incremental misuse detection and incremental anomaly detection. We use a boosting ensemble of weak classifiers to implement the misuse intrusion detection system. It can identify new classes of intrusions that do not exist in the training dataset for incremental misuse detection. As...


An attractor neural network architecture with an ultra high information capacity: numerical results

Attractor neural networks are an important theoretical scenario for modeling memory function in the hippocampus and the cortex. In these models, memories are stored in the plastic recurrent connections of neural populations in the form of "attractor states". The maximal information capacity for conventional abstract attractor networks with unconstrained connections is 2 bits/synapse. However, ...




Publication date: 2007