Pii: S0893-6080(97)00012-9
Author
Abstract
Kohonen's learning vector quantization (LVQ) is modified by attributing training counters to each neuron, which record its training statistics. During training, this allows for dynamic self-allocation of the neurons to classes. In the classification stage, training counters provide an estimate of the reliability of classification of the single neurons, which can be exploited to obtain a substantially higher purity of classification. The method turns out to be especially valuable in the presence of considerable overlaps among class distributions in the pattern space. The results of a typical application to high energy elementary particle physics are discussed in detail. © 1997 Elsevier Science Ltd.
Keywords: Learning vector quantization, Neural network architecture, Training, Classification, High energy physics, Elementary particle physics.
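The abstract does not give the paper's exact update rules, so the following is only a minimal sketch of the general idea: standard LVQ1 prototype updates augmented with per-neuron win counters, from which a simple reliability estimate (fraction of a neuron's wins coming from its own class) can be derived. The function name and all parameters are illustrative assumptions, not the paper's LVQTC algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def lvq_with_counters(X, y, n_protos_per_class, lr=0.05, epochs=20):
    """Sketch of LVQ1 with per-neuron training counters (illustrative only).

    counters[w, c] records how often prototype w won for a sample of class c;
    the per-neuron reliability is the fraction of wins from the neuron's own
    class, mimicking the idea of using training statistics to judge how
    trustworthy each neuron's classification is.
    """
    classes = np.unique(y)
    # Initialize prototypes from random samples of each class.
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx])
        proto_labels += [c] * n_protos_per_class
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)
    counters = np.zeros((len(protos), len(classes)))  # wins per (neuron, class)

    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.linalg.norm(protos - xi, axis=1)
            w = int(np.argmin(d))                       # winning neuron
            counters[w, np.searchsorted(classes, yi)] += 1
            # LVQ1: attract on correct label, repel on wrong label.
            sign = 1.0 if proto_labels[w] == yi else -1.0
            protos[w] += sign * lr * (xi - protos[w])

    wins = counters.sum(axis=1)
    own = counters[np.arange(len(protos)), np.searchsorted(classes, proto_labels)]
    reliability = np.where(wins > 0, own / np.maximum(wins, 1), 0.0)
    return protos, proto_labels, reliability
```

In this sketch, neurons with low reliability would be candidates for pruning or reassignment to another class, which is the spirit of the dynamic self-allocation the abstract describes.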
Similar resources
Learning Vector Quantization with Training Count (LVQTC)
Kohonen's learning vector quantization (LVQ) is modified by attributing training counters to each neuron, which record its training statistics. During training, this allows for dynamic self-allocation of the neurons to classes. In the classification stage training counters provide an estimate of the reliability of classification of the single neurons, which can be exploited to obtain a substant...
Full text
Pii: S0893-6080(00)00062-9
This article gives an overview of the different functional brain imaging methods, the kinds of questions these methods try to address and some of the questions associated with functional neuroimaging data for which neural modeling must be employed to provide reasonable answers. © 2000 Published by Elsevier Science Ltd.
Full text
On the piecewise analysis of networks of linear threshold neurons
The computational abilities of recurrent networks of neurons with a linear activation function above threshold are analyzed. These networks selectively realise a linear mapping of their input. Using this property, the dynamics as well as the number and the stability of stationary states can be investigated. The important property of the boundedness of neural activities can be guaranteed by glob...
Full text
Regularization with a Pruning Prior
We investigate the use of a regularization prior and its pruning properties. We illustrate the behavior of this prior by conducting analyses both using a Bayesian framework and with the generalization method, on a simple toy problem. Results are thoroughly compared with those obtained with a traditional weight decay. Copyright 1997 Elsevier Science Ltd.
Full text
Precision Requirements for Closed-Loop Kinematic Robotic Control Using Linear Local Mappings
Neural networks are approximation techniques that can be characterized by adaptability rather than by precision. For feedback systems, high precision can still be achieved in the presence of errors. Within a general iterative framework of closed-loop kinematic robotic control using linear local modeling, the inverse Jacobian matrix error and the maximum length of the displacement for which the line...
Full text