Search results for: training algorithms

Number of results: 629,109

1998
I. Fischer, M. R. Berthold

Graph transformations offer a unifying framework to formalize neural networks together with their corresponding training algorithms. It is straightforward to also describe topology-changing training algorithms with the help of these transformations. One of the benefits of using this formal framework is the support for proving properties of the training algorithms. A training algorithm for Probabi...
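The snippet above is truncated, but the core idea can be illustrated in miniature: represent the network as a labeled graph and treat a topology-changing training step as a graph rewrite. The sketch below is an editorial illustration in Python, not the authors' formalism; every name in it (make_net, insert_hidden) is hypothetical.

```python
# Minimal illustration (not the authors' formalism): a feed-forward net as a
# labeled graph, plus one topology-changing "transformation" that inserts a
# hidden unit. All names here are hypothetical.
import random

def make_net(n_in, n_out):
    """Build a graph: nodes plus weighted edges from every input to every output."""
    nodes = [f"in{i}" for i in range(n_in)] + [f"out{j}" for j in range(n_out)]
    edges = {(f"in{i}", f"out{j}"): random.uniform(-1, 1)
             for i in range(n_in) for j in range(n_out)}
    return {"nodes": nodes, "edges": edges}

def insert_hidden(net, name):
    """Graph rewrite: add a hidden node and reroute one input-output edge
    through it -- a topology-changing step expressed as a transformation."""
    (src, dst), w = next(iter(net["edges"].items()))
    del net["edges"][(src, dst)]
    net["nodes"].append(name)
    net["edges"][(src, name)] = w     # keep the old weight on the first hop
    net["edges"][(name, dst)] = 1.0   # identity-like second hop
    return net

net = insert_hidden(make_net(2, 1), "h0")
print(sorted(net["nodes"]), len(net["edges"]))
```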

Journal: Soft Comput. 2015
Michalis Mavrovouniotis, Shengxiang Yang

Feed-forward neural networks are commonly used for pattern classification. The classification accuracy of feed-forward neural networks depends on the configuration selected and the training process. Once the architecture of the network is decided, training algorithms, usually gradient descent techniques, are used to determine the connection weights of the feed-forward neural network. However, g...
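To make the cited training process concrete, here is a minimal gradient-descent sketch for a one-hidden-layer feed-forward network on XOR, assuming NumPy. It illustrates the generic technique the abstract refers to, not this paper's specific method; layer sizes and the learning rate are arbitrary choices.

```python
# Hedged sketch: plain gradient descent tuning the connection weights of a
# one-hidden-layer feed-forward network (XOR task). Shapes/names are ours.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1)                  # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)  # backprop: output delta
    d_h = d_out @ W2.T * h * (1 - h)     # hidden delta
    W2 -= lr * h.T @ d_out               # gradient-descent weight updates
    W1 -= lr * X.T @ d_h
print(out.round(2).ravel())              # typically converges toward [0, 1, 1, 0]
```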

Journal: CoRR 2014
Hindayati Mustafidah, Sri Hartati, Retantyo Wardoyo, Agus Harjoko

There are several training algorithms for the backpropagation method in neural networks. Not all of these algorithms reach the same level of accuracy, demonstrated through the percentage of suitability in recognizing patterns in the data. This research tested 12 training algorithms, specifically on recognizing data patterns of test validity. The basic network parameters used are the maximum allowa...
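The paper compares 12 training algorithms; as a hedged illustration of that comparison protocol only, the harness below trains the same toy model under two stand-in update rules (plain gradient descent and gradient descent with momentum) and ranks them by final error. The task, data, and rules are our own placeholders, not the study's setup.

```python
# Illustrative harness only: train one model under several update rules and
# compare final errors, mirroring the comparison protocol described above.
import numpy as np

def train(update, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    w, v = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        g = X.T @ (X @ w - y) / len(X)   # gradient of the mean squared error
        w, v = update(w, v, g)
    return float(np.mean((X @ w - y) ** 2))

def plain(w, v, g):                      # plain gradient descent
    return w - 0.05 * g, v

def momentum(w, v, g):                   # gradient descent with momentum
    v = 0.9 * v + 0.05 * g
    return w - v, v

for name, rule in [("plain GD", plain), ("momentum", momentum)]:
    print(name, train(rule))
```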

2001
Udo Seiffert

Multi-Layer Perceptron networks trained with the backpropagation algorithm are very frequently used to solve a wide variety of real-world problems. Usually a gradient descent algorithm is used to adapt the weights based on a comparison between the desired and actual network response to a given input stimulus. All training pairs, each consisting of an input vector and a desired output vector, are form...
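As a small illustration of the epoch structure described (every input/desired-output pair presented in turn, weights adapted against the error), here is a per-pattern delta-rule loop on a linear toy model. It is a generic sketch assuming NumPy, not Seiffert's method; all constants are arbitrary.

```python
# Sketch of the epoch structure: each (input, desired output) pair is
# presented every epoch; weights move against the error gradient.
import numpy as np

pairs = [(np.array([x], float), np.array([2.0 * x + 1.0])) for x in range(5)]
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(200):
    for x, target in pairs:        # one pass over all training pairs
        pred = w * x[0] + b
        err = pred - target[0]     # desired vs. actual response
        w -= lr * err * x[0]       # per-pattern gradient-descent update
        b -= lr * err
print(round(w, 2), round(b, 2))    # approaches w=2, b=1
```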

Journal: IEEE Transactions on Neural Networks 1990
Stephen P. Luttrell

A novel derivation is presented of T. Kohonen's topographic mapping training algorithm (Self-Organization and Associative Memory, 1984), based upon an extension of the Linde-Buzo-Gray (LBG) algorithm for vector quantizer design. Thus a vector quantizer is designed by minimizing an L(2) reconstruction distortion measure, including an additional contribution from the effect of code noise which co...
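A rough sketch of the connection the abstract draws, under our own simplifications: a plain vector quantizer update moves only the winning code vector, while spreading the update over neighboring codes (the role played by code noise in the derivation) yields a Kohonen-style neighborhood update. NumPy assumed; the schedule constants are arbitrary.

```python
# Hedged sketch: VQ trained by minimizing L2 distortion, with the update
# spread over neighbouring codes on a 1-D chain -- the neighbourhood weights
# stand in for the code-noise contribution described in the abstract.
import numpy as np

rng = np.random.default_rng(1)
codes = rng.uniform(0, 1, size=(10, 2))   # 10 code vectors on a 1-D chain
data = rng.uniform(0, 1, size=(2000, 2))

for t, x in enumerate(data):
    win = np.argmin(((codes - x) ** 2).sum(axis=1))   # nearest code (L2)
    lr = 0.5 * (1 - t / len(data))                    # decaying step size
    sigma = 3.0 * (1 - t / len(data)) + 0.5           # shrinking neighbourhood
    h = np.exp(-((np.arange(10) - win) ** 2) / (2 * sigma ** 2))
    codes += lr * h[:, None] * (x - codes)            # smoothed VQ update
print(codes.round(2))
```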

Journal: Int. J. Intell. Syst. 2004
Fangming Zhu, Steven Guan

Incremental training has been used for GA-based classifiers in a dynamic environment where training samples or new attributes/classes become available over time. In this paper, ordered incremental genetic algorithms (OIGAs) are proposed to address the incremental training of input attributes for classifiers. Rather than learning input attributes in batch as with normal GAs, OIGAs learn input at...
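To illustrate the ordered-incremental idea only (not the authors' OIGA operators), the sketch below evolves a solution for one input attribute, then extends every individual and keeps evolving as each further attribute arrives. The fitness function and mutation scheme are placeholder choices; NumPy assumed.

```python
# Rough illustration: attributes are introduced one at a time and the
# population is evolved on the growing attribute set, instead of evolving
# over all attributes at once. Simple (mu+lambda)-style loop; details ours.
import numpy as np

rng = np.random.default_rng(2)
X_full = rng.normal(size=(100, 3))
y = X_full @ np.array([1.5, -2.0, 0.7])

def fitness(w, k):                  # error using the first k attributes only
    return -np.mean((X_full[:, :k] @ w - y) ** 2)

pop = [rng.normal(size=1) for _ in range(20)]
for k in range(1, 4):               # attributes arrive one at a time
    for _ in range(300):            # evolve with the current attribute set
        pop.sort(key=lambda w: fitness(w, k), reverse=True)
        parents = pop[:10]
        pop = parents + [p + rng.normal(scale=0.1, size=k) for p in parents]
    if k < 3:                       # extend every individual for attribute k+1
        pop = [np.append(w, rng.normal()) for w in pop]
pop.sort(key=lambda w: fitness(w, 3), reverse=True)
print(pop[0].round(2))              # roughly recovers [1.5, -2.0, 0.7]
```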

1965
Zhengjun Pan

The Neocognitron, inspired by the mammalian visual system, is a complex neural network with numerous parameters and weights which must be trained in order to utilise it for pattern recognition. However, it is not easy to optimise these parameters and weights with gradient descent algorithms. In this paper, we present a staged training approach using evolutionary algorithms. The experiments demon...
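A miniature of staged evolutionary training, far removed from the actual Neocognitron: a simple (1+1) evolution strategy tunes one stage's parameters at a time while earlier stages stay frozen. Everything here (the toy loss, stage split, mutation scale) is assumed for illustration.

```python
# Sketch of staged evolutionary training: mutate one stage's parameters only,
# keep the child if it improves the loss, then move to the next stage.
import numpy as np

rng = np.random.default_rng(3)
target = np.array([0.3, -1.2, 0.8, 2.0])   # stage 1 = first 2, stage 2 = last 2

def loss(params):
    return float(np.sum((params - target) ** 2))

params = np.zeros(4)
for stage in [slice(0, 2), slice(2, 4)]:   # train stage 1, then stage 2
    for _ in range(200):
        child = params.copy()
        child[stage] += rng.normal(scale=0.1, size=2)  # mutate this stage only
        if loss(child) < loss(params):                 # (1+1)-ES selection
            params = child
print(params.round(2))
```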

2011
H. Yu

Second-order algorithms are very efficient for neural network training because of their fast convergence. In traditional implementations of second-order algorithms [Hagan and Menhaj 1994], the Jacobian matrix is calculated and stored, which may cause memory limitation problems when training with large numbers of patterns. In this paper, an improved computation is introduced to solve the memory limitation pr...
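The truncated snippet points at a known memory-saving trick for Levenberg-Marquardt-style training: accumulate the quasi-Hessian Q = JᵀJ and the gradient g = Jᵀe one pattern at a time, so the full Jacobian is never stored. The sketch below applies that idea to a toy linear model with NumPy; it is our paraphrase of the technique, not the paper's exact computation.

```python
# Sketch: second-order (damped Gauss-Newton) step with row-by-row accumulation
# of Q = J^T J and g = J^T e -- the P x N Jacobian never materializes.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(10_000, 5))
w_true = rng.normal(size=5)
y = X @ w_true
w, mu = np.zeros(5), 1e-3

for _ in range(3):                       # a few LM-style iterations
    Q = np.zeros((5, 5)); g = np.zeros(5)
    for x_p, y_p in zip(X, y):           # per-pattern accumulation: no J stored
        j_p = x_p                        # Jacobian row of the residual w.r.t. w
        e_p = x_p @ w - y_p              # residual for this pattern
        Q += np.outer(j_p, j_p)
        g += j_p * e_p
    w -= np.linalg.solve(Q + mu * np.eye(5), g)   # damped Gauss-Newton step
print(np.abs(w - w_true).max())          # near zero
```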

2005
V. P. Plagianakos

In this work, differential evolution strategies are applied to the training of neural networks with integer weights. These strategies were introduced by Storn and Price [Journal of Global Optimization, 11, pp. 341–359, 1997]. Integer-weight neural networks are better suited for hardware implementation than their real-weight counterparts. Our intention is to give a broad picture of the beh...
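As a hedged sketch of how a DE/rand/1/bin strategy (Storn and Price) can be adapted to integer weights, the code below mutates and crosses over in the reals and rounds to integers before evaluation, one common handling of the constraint. The fitness function is a placeholder, not a real network.

```python
# DE/rand/1/bin with rounding to integers at evaluation time. Toy fitness;
# population size, F, and CR are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(5)
target = np.array([3, -2, 5, 1])                 # pretend-optimal integer weights

def fitness(w_int):
    return float(np.sum((w_int - target) ** 2))  # lower is better

NP, D, F, CR = 20, 4, 0.7, 0.9
pop = rng.integers(-8, 9, size=(NP, D)).astype(float)

for _ in range(100):
    for i in range(NP):
        others = [j for j in range(NP) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True            # binomial crossover, >=1 gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(np.round(trial)) <= fitness(np.round(pop[i])):
            pop[i] = trial                       # greedy selection
best = np.round(pop[np.argmin([fitness(np.round(p)) for p in pop])])
print(best.astype(int))
```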

Journal: IEEE Transactions on Neural Networks 1991
Richard P. Brent

An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in sp...
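The abstract does not give enough detail to reproduce Brent's algorithm, so the sketch below shows only the k-d-tree-based fast pattern recognition it is compared against: build the tree once over stored patterns, then classify by nearest neighbor. It assumes SciPy (scipy.spatial.cKDTree) and uses a parity-style toy problem like the one mentioned.

```python
# Not Brent's algorithm itself; a sketch of the k-d-tree family it is
# compared to: O(n log n) tree build, then fast nearest-neighbour queries.
# Requires scipy.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
X = rng.integers(0, 2, size=(200, 4)).astype(float)  # binary patterns
y = X.sum(axis=1).astype(int) % 2                    # parity label (XOR of bits)

tree = cKDTree(X)                        # build the tree once
query = np.array([[1, 0, 1, 1]], float)
_, idx = tree.query(query, k=1)          # nearest stored training pattern
print("predicted parity:", y[idx[0]])
```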

[Chart: number of search results per year]