Search results for: training algorithms

Number of results: 629109

2009
Dumitru Erhan, Pierre-Antoine Manzagol, Yoshua Bengio, Samy Bengio, Pascal Vincent

Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. Answering th...

1999
William W. Cohen

Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based algorithm called Polyclass at the top, although it is n...

2002
Rodrigo C. de Lamare, Raimundo Sampaio Neto

In this paper we investigate the use of adaptive minimum bit error rate (MBER) Gradient-Newton algorithms in the design of linear multiuser receivers (MUD) for DS-CDMA systems. The proposed algorithms approximate the bit error rate (BER) from training data using linear multiuser detection structures. A comparative analysis of linear MUDs, employing minimum mean squared error (MMSE), previously ...

2003
Tae-Hoon Kim, Jiang Li, Michael T. Manry

Two effective neural network training algorithms are output weight optimization – hidden weight optimization and conjugate gradient. The former performs better on correlated data, and the latter performs better on random data. Based on these observations and others, we develop a procedure to test general neural network training algorithms. Since a good neural network algorithm should perform well...
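A minimal sketch of the conjugate-gradient idea this abstract refers to, shown on the simplest setting where it is exact: minimizing the quadratic 0.5·xᵀAx − bᵀx for a symmetric positive-definite A (linear CG with a Fletcher–Reeves-style update). This is illustrative only, not the paper's testing procedure; all names below are hypothetical.

```python
# Linear conjugate gradient on a small SPD system Ax = b.
# For an n-by-n SPD matrix, CG converges in at most n steps in exact arithmetic.

def conjugate_gradient(A, b, x0, iters=25, tol=1e-10):
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # residual b - Ax
    d = list(r)                                        # first search direction
    for _ in range(iters):
        rr = dot(r, r)
        if rr < tol:                                   # residual small enough
            break
        Ad = matvec(A, d)
        alpha = rr / dot(d, Ad)                        # exact line search step
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r, r) / rr                          # conjugacy-preserving update
        d = [ri + beta * di for ri, di in zip(r, d)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

For this 2x2 system the method reaches the exact solution (1/11, 7/11) in two iterations; nonlinear variants used for neural network training replace the exact line search and residuals with gradient evaluations.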

2008
Stephan Chalup, Frederic Maire

Hill climbing algorithms can train neural control systems for adaptive agents. They are an alternative to gradient descent algorithms, especially if neural networks with a non-layered topology or non-differentiable activation functions are used, or if the task is not suitable for backpropagation training. This paper describes three variants of generic hill climbing algorithms which together can tra...
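To make the setting concrete, here is a minimal hill-climbing sketch (not one of the paper's three variants): it trains a tiny one-hidden-layer network with a non-differentiable step activation, where backpropagation cannot be applied, by perturbing the weight vector and keeping non-worsening candidates. All function and variable names are illustrative assumptions.

```python
import random

def step(x):
    # Hard threshold: non-differentiable, so gradient descent is unusable.
    return 1.0 if x > 0 else 0.0

def forward(w, x):
    # 9 weights: two hidden units (2 inputs + bias each) and one output unit.
    h1 = step(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = step(w[3] * x[0] + w[4] * x[1] + w[5])
    return step(w[6] * h1 + w[7] * h2 + w[8])

def error(w, data):
    # Squared-error count of misclassified patterns.
    return sum((forward(w, x) - y) ** 2 for x, y in data)

def hill_climb(data, steps=5000, sigma=0.5, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    best_err = error(best, data)
    for _ in range(steps):
        # Gaussian perturbation of every weight; accept non-worsening moves
        # so the search can drift across the flat plateaus of the step net.
        cand = [wi + rng.gauss(0, sigma) for wi in best]
        cand_err = error(cand, data)
        if cand_err <= best_err:
            best, best_err = cand, cand_err
    return best, best_err

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, e = hill_climb(XOR)
```

The acceptance rule `cand_err <= best_err` matters here: with a strict `<` the search would stall on the plateaus that threshold units create.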

2008
Vreixo Formoso, Fidel Cacheda, Víctor Carneiro

In this work we present a series of collaborative filtering algorithms known for their simplicity and efficiency. The efficiency of these algorithms was compared with that of other, more representative collaborative filtering algorithms. The results demonstrate that the response times are better than those of the rest (by at least two orders of magnitude), in training as well as when making predi...

2004
Selahattin SAYIL

The performance of a CMAC neural network depends on the training algorithms and the selection of input points. Papers have been published that explain CMAC algorithms but little work has been done to improve existing algorithms. In this paper, the existing algorithms are first explained and then compared using computational results and the algorithm properties. Improvements are made to the reco...

Journal: Neural Computation, 1992
Léon Bottou, Vladimir Vapnik

Very rarely are training data evenly distributed in the input space. Local learning algorithms attempt to locally adjust the capacity of the training system to the properties of the training set in each area of the input space. The family of local learning algorithms contains known methods, like the k-Nearest Neighbors method (kNN) or the Radial Basis Function networks (RBF), as well as new alg...
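The kNN method named in this abstract is the simplest member of the local-learning family: rather than fitting one global model, each query is answered by a simple rule (here a majority vote) fit only on the k training points nearest to that query. A minimal sketch, with illustrative names and toy data:

```python
import math

def knn_predict(train, query, k=3):
    # train: list of ((x1, x2), label) pairs.
    # Capacity is "local": only the k nearest points influence the prediction.
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

data = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"),
        ((0.9, 1.1), "b"), ((0.2, 0.1), "a"), ((1.1, 0.9), "b")]
pred = knn_predict(data, (0.05, 0.1))
```

Varying k per region of the input space is one way to realize the abstract's point about adjusting capacity to the local properties of the training set.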

2005
Sunita Sarawagi

Segmentation of sequences is an important modeling primitive with several applications. Training and inference of segmentation models involves dynamic programming computations that in the worst case can be cubic in the length of a sequence. In contrast, typical sequence labeling models require linear time. We propose an alternative graphical model for efficient sharing of potentials across over...
