XOR and backpropagation learning: in and out of the chaos?

Authors

  • Koen Bertels
  • Luc Neuberg
  • Stamatis Vassiliadis
  • Gerald G. Pechanek
Abstract

In this paper, we investigate the dynamic behavior of a backpropagation neural network while it learns the XOR boolean function. It has been shown that the backpropagation algorithm can exhibit chaotic behavior, which implies a highly irregular and virtually unpredictable evolution. We study this chaotic behavior as learning progresses. Our investigation indicates that chaos appears to diminish as the neural network learns to produce the correct output. We also observe that for certain values of the learning-rate parameter the network times out and appears unable to produce the correct output.
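The setup the abstract describes, a small feed-forward network trained with plain backpropagation on XOR, can be sketched as follows. The 2-3-1 architecture, sigmoid activations, learning rate, and epoch count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(lr=0.5, epochs=20000, seed=0):
    """Train a 2-3-1 sigmoid network on XOR with vanilla backpropagation.

    Returns the network's outputs on the four XOR input patterns.
    All hyperparameters here are illustrative, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Small random initial weights; zero biases.
    W1 = rng.normal(scale=1.0, size=(2, 3)); b1 = np.zeros(3)
    W2 = rng.normal(scale=1.0, size=(3, 1)); b2 = np.zeros(1)

    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)            # hidden activations, shape (4, 3)
        out = sigmoid(h @ W2 + b2)          # network output, shape (4, 1)

        # Backward pass for squared error, using sigmoid' = s * (1 - s).
        d_out = (out - y) * out * (1 - out)       # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)        # hidden-layer deltas

        # Batch gradient-descent updates.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

Re-running with different values of `lr` is one way to probe the sensitivity the abstract mentions; for some settings the loss may plateau rather than converge.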


Similar resources

A look inside the learning process of neural networks

One of the best-known results of the sciences of complexity is that complex systems learn on the edge of chaos, by which is meant that chaotic and orderly states coexist, that the system remains close to this borderline, and that it may switch from one state to the other. In this article, we take a look inside the learning process of neural networks, and we more specifically focus on the role o...


Chaos/Complexity Theory and Education

Sciences exist to demonstrate the fundamental order underlying nature. Chaos/complexity theory is a novel and fascinating field of scientific inquiry. Notions from our everyday experience are connected to the laws of nature through chaos/complexity theory's concern with the relationships between simplicity and complexity, and between orderliness and randomness (Retrieved from http://www.inc...


Fig. 4a: Error convergence of the encoder networks with log error; Fig. 4b: Error convergence of the encoder networks with square error

We describe the Alopex algorithm as a universal learning algorithm for neural networks. The algorithm is stochastic and can be used for learning in networks of any topology, including those with feedback. The neurons may use any transfer function, and learning can involve minimization of any error measure. The efficacy of the algorithm is investigated by applying it to multilayer ...


Comparison of optimized backpropagation algorithms

Backpropagation is one of the most famous training algorithms for multilayer perceptrons. Unfortunately, it can be very slow in practical applications. In recent years, many improvement strategies have been developed to speed up backpropagation. It is very difficult to compare these different techniques because most of them have been tested on various specific data sets. Most of the reported...


Error-backpropagation in temporally encoded networks of spiking neurons

For a network of spiking neurons that encodes information in the timing of individual spike-times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-...



Publication year: 1995