On-line learning and generalisation in coupled perceptrons

Author

  • D Bollé
Abstract

We study supervised learning and generalisation in coupled perceptrons trained on-line using two learning scenarios. In the first scenario the teacher and the student are independent networks, both represented by an Ashkin-Teller perceptron. In the second scenario the student and the teacher are simple perceptrons but are coupled by an Ashkin-Teller-type four-neuron interaction term. Expressions for the generalisation error and the learning curves are derived for various learning algorithms. The analytic results find excellent confirmation in numerical simulations.

PACS numbers: 87.18.Sn, 05.20.-y, 87.10.+e
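The coupled Ashkin-Teller setup is specific to the paper, but the on-line teacher-student scenario it builds on can be sketched for a single simple perceptron pair. A minimal illustration, assuming on-line Hebbian updates and the standard overlap-based generalisation error ε = arccos(R)/π; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500   # input dimension
P = 5000  # number of on-line examples (alpha = P/N = 10)

# Teacher: fixed random unit vector; student starts from zero.
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)
student = np.zeros(N)

eta = 1.0
for _ in range(P):
    x = rng.standard_normal(N)
    label = np.sign(teacher @ x)
    # On-line Hebbian update: move the student toward the teacher's label.
    student += (eta / N) * label * x

# Teacher-student overlap R and generalisation error eps = arccos(R)/pi,
# i.e. the probability that student and teacher disagree on a random input.
R = (student @ teacher) / np.linalg.norm(student)
eps = np.arccos(np.clip(R, -1.0, 1.0)) / np.pi
print(round(eps, 3))
```

For the Hebb rule the error decays with the number of examples per weight, so at alpha = 10 the student already classifies most random inputs correctly.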


Similar articles

Parallel strategy for optimal learning in perceptrons

Abstract. We developed a parallel strategy for optimally learning specific realizable rules with perceptrons in an on-line learning scenario. Our result generalises the Caticha-Kinouchi (CK) algorithm, developed for learning a perceptron whose synaptic vector is drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK alg...


On the Generalisation Ability of Diluted Perceptrons

A linearly separable Boolean function is learned by a diluted perceptron with optimal stability. Different levels of dilution are allowed for the teacher and student perceptrons. The learning algorithms used were optimal annealed dilution and Hebbian dilution. The generalisation ability, i.e. the probability of recognizing a pattern that has not been learned before, is calculated in replica symmetry.


Local linear perceptrons for classification

A structure composed of local linear perceptrons for approximating global class discriminants is investigated. Such local linear models may be combined in a cooperative or competitive way. In the cooperative model, a weighted sum of the outputs of the local perceptrons is computed, where each weight is a function of the distance between the input and the position of the corresponding local perceptron. In the c...
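The cooperative combination described above can be sketched as a distance-weighted sum of local linear outputs. The Gaussian weighting, the unit count, and all variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: K local linear units, each with its own position
# (center), weight vector, and bias.
K, d = 4, 2
centers = rng.standard_normal((K, d))
W = rng.standard_normal((K, d))
b = rng.standard_normal(K)

def cooperative_output(x, beta=1.0):
    """Weighted sum of local linear outputs; the weight of unit k
    decays with the distance between x and that unit's position."""
    local = W @ x + b                          # each unit's linear output
    dist2 = np.sum((centers - x) ** 2, axis=1)
    g = np.exp(-beta * dist2)                  # proximity-based weights
    g /= g.sum()                               # normalise to a convex sum
    return g @ local

x = rng.standard_normal(d)
print(cooperative_output(x))
```

Because the weights are normalised, the combined output is a convex combination of the local predictions, so units near the input dominate the decision.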


Non-Deterministic Learning Dynamics in Large Neural Networks due to Structural Data

We study the dynamics of on-line learning in large (N → ∞) perceptrons, for the case of training sets with a structural O(N^0) bias of the input vectors, by deriving exact and closed macroscopic dynamical laws using non-equilibrium statistical mechanical tools. In sharp contrast to the more conventional theories developed for homogeneously distributed or only weakly biased data, these laws are ...


Ensemble learning of linear perceptrons: on-line learning theory

Abstract. Within the framework of on-line learning, we study the generalization error of an ensemble learning machine learning from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons with homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behav...
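The ensemble scenario can be illustrated with K linear students trained on-line on a common linear teacher and their outputs averaged. The LMS update rule, the inhomogeneous (random) initialisation, and all parameter values are assumptions for illustration, not the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, P = 200, 5, 2000   # inputs, ensemble size, on-line examples

teacher = rng.standard_normal(N) / np.sqrt(N)
# K students with inhomogeneous (random) initial weight vectors.
students = rng.standard_normal((K, N)) / np.sqrt(N)

eta = 0.5
for _ in range(P):
    x = rng.standard_normal(N)
    y = teacher @ x                     # linear teacher output
    err = students @ x - y              # each student's error on this example
    # Independent on-line LMS step for every student.
    students -= (eta / N) * err[:, None] * x[None, :]

# Ensemble output = average of the students' outputs.
ensemble = students.mean(axis=0)
test_x = rng.standard_normal((1000, N))
mse = np.mean((test_x @ ensemble - test_x @ teacher) ** 2)
print(round(mse, 4))
```

Since squaring is convex, the ensemble's error on any input is bounded by the average of the individual students' squared errors, which is one source of the improvement that the exact calculation quantifies.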



Journal title:

Volume   Issue

Pages  -

Publication date: 2001