Catastrophic Interference is Eliminated in Pretrained Networks
Authors
Abstract
When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). The main cause of this interference is overlap among representations at the hidden unit layer (French, 1991; Hetherington, 1991; Murre, 1992). This can be alleviated by constraining the number of hidden units allocated to representing each item, thus reducing overlap and interference (French, 1991; Kruschke, 1992). When human subjects perform a laboratory memory experiment, they arrive with a wealth of prior knowledge that is relevant to performing the task. If a network is given the benefit of relevant prior knowledge, the representation of new items is constrained naturally, so that a sequential task involving novel items can be performed with little interference. Three laboratory memory experiments (ABA free recall, serial list, and ABA paired-associate learning) are used to show that little or no interference is found in networks that have been pretrained with a simple and relevant knowledge base. Thus, catastrophic interference is eliminated when critical aspects of simulations are made to be more analogous to the corresponding human situation.
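As a rough illustration of the abstract's argument (not the paper's own simulations), the Python sketch below pretrains a small autoassociative network on a stand-in "knowledge base", then learns two lists of novel items in sequence and checks how well list A is retained after list B is learned, with and without the pretraining step. The architecture, item encodings, training schedule, and the choice of an autoassociation task are illustrative assumptions, not details taken from the paper.

```python
# Illustrative only: compare retroactive interference on "list A" after
# learning "list B", with and without pretraining on a stand-in knowledge base.
# Task, sizes, and training settings are assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, HIDDEN = 16, 32

def make_items(n):
    # Random binary patterns standing in for list items; the task is
    # autoassociation (reproduce the input pattern on the output layer).
    x = (torch.rand(n, DIM) > 0.5).float()
    return x, x.clone()

def train(net, x, y, epochs=500, lr=0.5):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(net, x, y):
    # Fraction of output units on the correct side of 0.5.
    return ((net(x) > 0.5).float() == y).float().mean().item()

def run(pretrain):
    net = nn.Sequential(nn.Linear(DIM, HIDDEN), nn.Sigmoid(),
                        nn.Linear(HIDDEN, DIM), nn.Sigmoid())
    if pretrain:
        # "Prior knowledge": a larger base set from the same domain,
        # learned before the sequential task begins.
        train(net, *make_items(100))
    list_a = make_items(8)
    list_b = make_items(8)
    train(net, *list_a)            # learn list A
    before = accuracy(net, *list_a)
    train(net, *list_b)            # then learn list B, with no further exposure to A
    after = accuracy(net, *list_a)
    print(f"pretrained={pretrain}: list A before B = {before:.2f}, after B = {after:.2f}")

run(pretrain=False)  # a larger drop on list A is expected (catastrophic interference)
run(pretrain=True)   # a smaller drop is expected, per the abstract's claim
```

The exact numbers depend on the assumed sizes and learning settings; the point of the sketch is only the qualitative comparison the abstract describes.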
Similar papers
Catastrophic interference in connectionist networks
Contents: Introduction; Catastrophic forgetting vs. normal forgetting; Measures of catastrophic interference; Solutions to the problem; Rehearsal and pseudorehearsal; Other techniques for alleviating catastrophic forgetting in neural networks; Summary
Catastrophic Interference in Connectionist Networks: Can It Be Predicted, Can It Be Prevented?
Catastrophic forgetting occurs when connectionist networks learn new information, and by so doing, forget all previously learned information. This workshop focused primarily on the causes of catastrophic interference, the techniques that have been developed to reduce it, the effect of these techniques on the networks' ability to generalize, and the degree to which prediction of catastrophic for...
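Among the interference-reduction techniques referred to in these related papers (rehearsal and pseudorehearsal appear in the contents above), pseudorehearsal is simple enough to sketch. The Python fragment below is a generic illustration of the idea of pairing random probes with the trained network's own responses and interleaving them with new training items; it is not the specific method evaluated in the workshop paper, and all sizes and settings are assumptions.

```python
# Generic pseudorehearsal sketch, not the specific method discussed in the
# workshop paper; network sizes and pseudo-item counts are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM, HIDDEN = 16, 32
net = nn.Sequential(nn.Linear(DIM, HIDDEN), nn.Sigmoid(),
                    nn.Linear(HIDDEN, DIM), nn.Sigmoid())

def train(x, y, epochs=500, lr=0.5):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

# 1. Learn the old items normally.
old_x = (torch.rand(8, DIM) > 0.5).float()
old_y = (torch.rand(8, DIM) > 0.5).float()
train(old_x, old_y)

# 2. Build pseudo-items: random probes paired with the *current* network's
#    responses to them, which approximately capture the old mapping.
with torch.no_grad():
    pseudo_x = (torch.rand(32, DIM) > 0.5).float()
    pseudo_y = net(pseudo_x)

# 3. Learn the new items interleaved with the pseudo-items, so the weights are
#    pulled toward the new mapping while being held near the old one.
new_x = (torch.rand(8, DIM) > 0.5).float()
new_y = (torch.rand(8, DIM) > 0.5).float()
train(torch.cat([new_x, pseudo_x]), torch.cat([new_y, pseudo_y]))

# Retention of the old items after learning the new ones:
acc = ((net(old_x) > 0.5).float() == old_y).float().mean().item()
print(f"old-item retention with pseudorehearsal: {acc:.2f}")
```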
Extraction of Patterns from a Hippocampal Network Using Chaotic Recall
In neural networks, when new patterns are learned by a network, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. We have already proposed a biologically inspired dual-network memory model which can reduce catastrophic interference. Although two distinct networks of the model correspond to the ...
Meaningful Representations Prevent Catastrophic Interference
Artificial Neural Networks (ANNs) attempt to mimic human neural networks in order to perform tasks. In order to do this, tasks need to be represented in ways that the network understands. In ANNs these representations are often arbitrary, whereas in humans it seems that these representations are often meaningful. This article shows how using more meaningful representations in ANNs can be very b...
Avoiding Catastrophic Forgetting by a Dual-Network Memory Model Using a Chaotic Neural Network
In neural networks, when new patterns are learned by a network, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. In this paper, we propose a biologically inspired neural network model which overcomes this problem. The proposed model consists of two distinct networks: one is a Hopfield type of ...
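For context on the "Hopfield type of" network named in this snippet, the sketch below shows a standard Hopfield-style autoassociative memory with Hebbian storage and iterative recall. It is only the generic component, not the authors' dual-network model or its chaotic-recall mechanism, and the pattern counts and sizes are assumptions.

```python
# Standard Hopfield-style autoassociative memory (Hebbian storage, synchronous
# recall), sketched to illustrate the Hopfield component named above; this is
# not the dual-network model from the paper. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 64, 5

# Bipolar (+1/-1) patterns to store.
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Hebbian weight matrix: sum of outer products, no self-connections.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0.0)

def recall(probe, steps=10):
    """Iteratively update the state until it settles (synchronous updates)."""
    state = probe.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Cue with a degraded version of a stored pattern (20% of units flipped).
cue = patterns[0].copy()
flip = rng.choice(n_units, size=n_units // 5, replace=False)
cue[flip] *= -1

retrieved = recall(cue)
print("overlap with stored pattern:", float(np.mean(retrieved == patterns[0])))
```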