Search results for: hidden training

Number of results: 378572

2003
Say Wei Foo, Yong Lian, Liang Dong

A novel two-channel algorithm is proposed in this paper for discriminative training of Hidden Markov Models (HMMs). It adjusts the symbol emission coefficients of an existing HMM to maximize the separable distance between a pair of confusable training samples. The method is applied to identify the visemes of visual speech. The results indicate that the two-channel training method provides bette...
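
The abstract does not reproduce the paper's definition of the separable distance or its update rule; the sketch below only illustrates the underlying idea, assuming separation is measured as the gap in forward log-likelihood between a sample the model should accept and a confusable one (the function names and this scoring choice are assumptions, not the paper's method).

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Forward-algorithm log-likelihood of a discrete observation sequence
    under an HMM with initial probabilities pi, transitions A, emissions B."""
    alpha = pi * B[:, obs[0]]
    logp = 0.0
    for symbol in obs[1:]:
        scale = alpha.sum()
        logp += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, symbol]
    return logp + np.log(alpha.sum())

def separation(pos_obs, neg_obs, pi, A, B):
    """Gap between how well the model explains a correct sample and a
    confusable one; discriminative adjustment of the emission matrix B
    would aim to widen this gap."""
    return forward_loglik(pos_obs, pi, A, B) - forward_loglik(neg_obs, pi, A, B)
```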

1999
Ralf Schlüter, Wolfgang Macherey, Boris Müller, Hermann Ney

In this work, a method for splitting continuous mixture-density hidden Markov models (HMMs) is presented. The approach combines a model evaluation measure based on the Maximum Mutual Information (MMI) criterion with subsequent standard Maximum Likelihood (ML) training of the HMM parameters. Experiments were performed on the SieTill corpus for telephone-line-recorded German continuous digit string...
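
For reference, the MMI criterion mentioned here is usually written as below, with acoustic observations $X_r$, spoken transcriptions $W_r$, and model parameters $\lambda$ (notation assumed; the abstract does not spell out how the splitting measure is derived from it):

\[ F_{\mathrm{MMI}}(\lambda) = \sum_r \log \frac{p_\lambda(X_r \mid W_r)\, P(W_r)}{\sum_{W} p_\lambda(X_r \mid W)\, P(W)} \]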

2017
Anders Krogh

A hidden Markov model for labeled observations, called a CHMM, is introduced, and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences, it is trained to optimize recognition. It resembles MMI training, but is more general and has MMI as a special case. The standard forward-backward procedure ...
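
As a rough illustration of the contrast drawn in the abstract (the CHMM's own objective is not reproduced here), standard ML training on labeled sequences maximizes the joint likelihood, while MMI-style training maximizes the conditional probability of the labels given the observations:

\[ \mathcal{L}_{\mathrm{ML}}(\theta) = \sum_n \log P\big(x^{(n)}, y^{(n)} \mid \theta\big), \qquad \mathcal{L}_{\mathrm{MMI}}(\theta) = \sum_n \log \frac{P\big(x^{(n)}, y^{(n)} \mid \theta\big)}{\sum_{y'} P\big(x^{(n)}, y' \mid \theta\big)} \]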

1996
Axel Röbel

Scaling properties of neural networks, that is, the relations between the number of hidden units and the training or generalization error, have recently been investigated theoretically with encouraging results. In our paper we investigate experimentally whether the theoretical results can be expected to hold in practical applications. We investigate different neural network structures with varying numbe...

Journal: Neurocomputing 2003
Armando Vieira, Nuno Barradas

We propose an algorithm for training Multi-Layer Perceptrons for classification problems, which we call Hidden Layer Learning Vector Quantization (H-LVQ). It consists of applying Learning Vector Quantization to the last hidden layer of an MLP, and it gave very successful results on problems containing a large number of correlated inputs. It was applied with excellent results on classification of ...
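
A minimal sketch of the kind of LVQ update that could be applied to last-hidden-layer activations, assuming the MLP has already been trained and its hidden representations extracted; the function names, learning rate, and the choice of the plain LVQ1 rule are assumptions, not the paper's exact H-LVQ procedure.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, h, label, lr=0.05):
    """One LVQ1 update on a hidden-layer activation vector h with class `label`:
    pull the nearest prototype toward h if the classes match, push it away otherwise."""
    k = np.argmin(np.linalg.norm(prototypes - h, axis=1))
    direction = 1.0 if proto_labels[k] == label else -1.0
    prototypes[k] += direction * lr * (h - prototypes[k])
    return prototypes
```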

2009

- Find $\hat{P}(N^j \to \zeta) = \frac{C(N^j \to \zeta)}{\sum_\gamma C(N^j \to \gamma)}$, where $C(X)$ is the count of how often rule $X$ is used.
- No annotation means no rule counts: a hidden-data problem, similar to Hidden Markov Models.
- Start with some initial rule probabilities, parse the training sentences, and use the parse probabilities as an indicator of confidence.
- Find the expectation of how often each rule is used.
- Based on these expectations, maximize the probabilities (see the sketch below).
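
A minimal sketch of that expectation-maximization loop, assuming a hypothetical helper expected_rule_counts(sentence, rule_probs) that parses one sentence under the current probabilities and returns the expected usage count of each rule (the E-step); the M-step then renormalizes the accumulated counts per left-hand-side nonterminal.

```python
from collections import defaultdict

def em_step(rule_probs, training_sentences, expected_rule_counts):
    """One EM iteration over rule probabilities for an unannotated corpus."""
    counts = defaultdict(float)          # expected count of each rule (lhs, rhs)
    for sentence in training_sentences:  # E-step: accumulate expected usages
        for rule, c in expected_rule_counts(sentence, rule_probs).items():
            counts[rule] += c
    totals = defaultdict(float)          # M-step: renormalize per nonterminal
    for (lhs, _), c in counts.items():
        totals[lhs] += c
    return {(lhs, rhs): c / totals[lhs] for (lhs, rhs), c in counts.items()}
```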

2010
Frédéric Dandurand, Thomas Hannagan

We study neural network models that learn location-invariant orthographic representations for printed words. We compare two model architectures: with and without a hidden layer. We find that both architectures succeed in learning the training data and in capturing benchmark phenomena of skilled reading – transposed-letter and relative-position priming. Networks without a hidden layer use a stra...

Journal: Neurocomputing 1998
Guido Bugmann

The performance of Normalised RBF (NRBF) nets and standard RBF nets is compared in simple classification and mapping problems. In Normalised RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the pa...
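
In the usual formulation (assumed here, consistent with the abstract's description), normalization divides each hidden activation by the sum of all hidden activations, so near a given centre the output is dominated by that centre's weight:

\[ y_{\mathrm{RBF}}(x) = \sum_i w_i\, \phi_i(x), \qquad y_{\mathrm{NRBF}}(x) = \frac{\sum_i w_i\, \phi_i(x)}{\sum_j \phi_j(x)}, \qquad \phi_i(x) = \exp\!\left(-\frac{\lVert x - c_i \rVert^2}{2\sigma^2}\right) \]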

2001
Andrew D. Brown, Geoffrey E. Hinton

Logistic units in the first hidden layer of a feedforward neural network compute the relative probability of a data point under two Gaussians. This leads us to consider substituting other density models. We present an architecture for performing discriminative learning of Hidden Markov Models using a network of many...
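
The claim about logistic units follows from the standard identity below: writing the log odds of two classes as $a$, the sigmoid of $a$ is the posterior of the first class, and for two Gaussians with a shared covariance $\Sigma$ this log odds is linear in $x$ (a derivation sketch, not taken from the paper itself).

\[ a = \log\frac{p(x \mid C_1)\,P(C_1)}{p(x \mid C_2)\,P(C_2)}, \qquad P(C_1 \mid x) = \frac{1}{1 + e^{-a}} = \sigma(a), \qquad a = \big(\Sigma^{-1}(\mu_1 - \mu_2)\big)^{\!\top} x + b \ \text{ for shared } \Sigma \]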

2008
Reda A. El-Khoribi

In this paper, a novel approach to ECG signal classification is proposed. The approach is based on using hidden conditional random fields (HCRF) to model the ECG signal. Features used in training and testing the HCRF are based on time-frequency analysis of the ECG waveforms. Experimental results show that the HCRF model is promising and gives higher accuracy compared to maximum-likelihood (ML) t...
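
For context, an HCRF models the label conditionally on the observation sequence while summing over hidden state sequences $h$; a common form is shown below, with parameters $\theta$ and a feature function $\Phi$ (here the features would come from the time-frequency analysis, though the paper's exact feature set is not given in the abstract).

\[ P(y \mid x;\, \theta) = \frac{\sum_{h} \exp\!\big(\theta^{\top} \Phi(y, h, x)\big)}{\sum_{y'} \sum_{h} \exp\!\big(\theta^{\top} \Phi(y', h, x)\big)} \]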

Chart: number of search results per year
