Search results for: hidden training
Number of results: 378,572
A novel two-channel algorithm is proposed in this paper for discriminative training of Hidden Markov Models (HMMs). It adjusts the symbol emission coefficients of an existing HMM to maximize the separable distance between a pair of confusable training samples. The method is applied to identify the visemes of visual speech. The results indicate that the two-channel training method provides bette...
In this work, a method for splitting continuous mixture-density hidden Markov models (HMMs) is presented. The approach combines a model-evaluation measure based on the Maximum Mutual Information (MMI) criterion with subsequent standard Maximum Likelihood (ML) training of the HMM parameters. Experiments were performed on the SieTill corpus of telephone-line-recorded German continuous digit string...
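The excerpt above does not show the paper's MMI-based split-selection measure, but the mixture-splitting operation it builds on is the standard one: duplicate a Gaussian component and perturb the two copies' means in opposite directions along the standard deviation, then re-estimate with ML. A minimal sketch (the `eps` step size and diagonal covariances are illustrative assumptions):

```python
import numpy as np

def split_gaussian(mean, var, eps=0.2):
    """Split one diagonal-covariance Gaussian into two by perturbing the
    mean by +/- eps standard deviations (a common ML splitting heuristic;
    the paper's MMI-based choice of WHICH component to split is not shown
    in this excerpt). Returns two (mean, var) pairs sharing the variance."""
    offset = eps * np.sqrt(var)
    return (mean - offset, var.copy()), (mean + offset, var.copy())

# usage: split a 2-D component; ML re-estimation would follow
m, v = np.array([0.0, 1.0]), np.array([1.0, 4.0])
g1, g2 = split_gaussian(m, v)
```

After such a split, both offspring components are typically given half the parent's mixture weight before the next ML iteration.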
A hidden Markov model for labeled observations, called a CHMM, is introduced, and a maximum-likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences, it is trained to optimize recognition. It resembles MMI training, but is more general, and has MMI as a special case. The standard forward-backward procedure ...
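The forward half of the forward-backward procedure mentioned above computes the likelihood of an observation sequence under an HMM by recursively propagating state probabilities. A generic sketch (for a plain discrete-emission HMM, not the paper's CHMM variant):

```python
import numpy as np
from itertools import product

def forward(pi, A, B, obs):
    """Forward pass: returns P(obs | model).
    pi: initial state probs (N,); A: transition matrix (N, N);
    B: emission matrix (N, M); obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for t in obs[1:]:
        alpha = (alpha @ A) * B[:, t]  # propagate, then weight by emission
    return alpha.sum()

# sanity usage: likelihoods over ALL length-2 sequences must sum to 1
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
total = sum(forward(pi, A, B, obs) for obs in product([0, 1], repeat=2))
```

The backward pass is symmetric and, combined with the forward pass, yields the expected state-occupation counts used in re-estimation.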
Scaling properties of neural networks, i.e. the relations between the number of hidden units and the training or generalization error, have recently been investigated theoretically with encouraging results. In our paper we investigate experimentally whether the theoretical results can be expected to hold in practical applications. We investigate different neural network structures with varying numbe...
We propose an algorithm for training Multi-Layer Perceptrons (MLPs) for classification problems, which we name Hidden Layer Learning Vector Quantization (H-LVQ). It consists of applying Learning Vector Quantization to the last hidden layer of an MLP, and it gave very successful results on problems containing a large number of correlated inputs. It was applied with excellent results to classification of ...
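The LVQ step that H-LVQ applies to the last hidden layer is, in its simplest (LVQ1) form, a nearest-prototype update: the closest codebook vector moves toward the sample if their class labels agree, and away otherwise. A minimal sketch of one such update, assuming `x` is a hidden-layer activation vector (the learning rate and LVQ1 variant are illustrative choices, not the paper's exact recipe):

```python
import numpy as np

def lvq1_step(protos, labels, x, y, lr=0.1):
    """One LVQ1 update on an array of prototypes (in H-LVQ these would
    live in the space of last-hidden-layer activations, not raw inputs).
    Moves the nearest prototype toward x if labels match, away otherwise."""
    i = int(np.argmin(((protos - x) ** 2).sum(axis=1)))  # nearest prototype
    sign = 1.0 if labels[i] == y else -1.0
    protos[i] += sign * lr * (x - protos[i])
    return protos

# usage: prototype 0 (class 0) is nearest to x and shares its label
protos = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = [0, 1]
protos = lvq1_step(protos, labels, np.array([1.0, 1.0]), y=0)
```

Classification after training assigns each input the label of its nearest prototype in hidden-activation space.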
◮ find P̂(N_j → ζ) = C(N_j → ζ) / Σ_γ C(N_j → γ)
◮ C(X) = count of how often rule X is used
◮ no annotation ⇒ no rule counts! This is a hidden-data problem, similar to Hidden Markov Models
◮ start with some initial rule probabilities, parse the training sentences, and use parse probabilities as an indicator of confidence
◮ find the expectation of how often each rule is used
◮ based on these expectations, maximize the probabilities:
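The maximization step above is just a per-nonterminal normalization of the (possibly fractional, expectation-derived) rule counts. A minimal sketch with hypothetical rule names:

```python
from collections import defaultdict

def estimate_rule_probs(counts):
    """P(N_j -> zeta) = C(N_j -> zeta) / sum_gamma C(N_j -> gamma).
    counts: dict mapping (lhs, rhs) -> count; counts may be fractional
    expected counts from the EM-style loop described above."""
    totals = defaultdict(float)
    for (lhs, _rhs), c in counts.items():
        totals[lhs] += c                      # denominator per left-hand side
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

# usage with made-up expected counts for one nonterminal
probs = estimate_rule_probs({("NP", ("DT", "NN")): 3.0,
                             ("NP", ("NN",)): 1.0})
```

Iterating expectation (re-parsing with the new probabilities) and this maximization step is the inside-outside flavor of EM for PCFGs.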
We study neural network models that learn location invariant orthographic representations for printed words. We compare two model architectures: with and without a hidden layer. We find that both architectures succeed in learning the training data and in capturing benchmark phenomena of skilled reading – transposed-letter and relative-position priming. Networks without a hidden layer use a stra...
Abstract: The performances of Normalised RBF (NRBF) nets and standard RBF nets are compared in simple classification and mapping problems. In Normalized RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the pa...
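The role-switch described above amounts to one extra line relative to a standard RBF net: the hidden activations are divided by their sum, so the output becomes a partition-of-unity weighted average of the output weights. A minimal sketch with Gaussian basis functions (the specific kernel and parameters are illustrative assumptions):

```python
import numpy as np

def nrbf(x, centers, widths, weights):
    """Normalised RBF net output for a single input vector x.
    The normalisation step is what distinguishes NRBF from plain RBF
    and gives hidden nodes their Voronoi-tessellation-like behaviour."""
    d2 = ((x - centers) ** 2).sum(axis=1)      # squared distance to each center
    phi = np.exp(-d2 / (2.0 * widths ** 2))    # Gaussian activations
    phi = phi / phi.sum()                      # normalise: activations sum to 1
    return phi @ weights                       # weighted average of output weights

# usage: with equal output weights, the output equals that weight everywhere,
# illustrating the partition-of-unity property
out = nrbf(np.array([0.3, -0.2]),
           centers=np.array([[0.0, 0.0], [1.0, 1.0]]),
           widths=np.array([1.0, 1.0]),
           weights=np.array([2.0, 2.0]))
```

A plain RBF net would instead return `phi @ weights` without the normalisation, so its output decays to zero far from all centers rather than interpolating between the weights.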
Geoffrey E. Hinton, Gatsby Unit, UCL, London WC1N 3AR, UK. [email protected] Logistic units in the first hidden layer of a feedforward neural network compute the relative probability of a data point under two Gaussians. This leads us to consider substituting other density models. We present an architecture for performing discriminative learning of Hidden Markov Models using a network of many...
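The opening claim can be verified directly: for two Gaussians with equal variance and equal priors, the posterior probability of one class is exactly a logistic function of a linear combination of the input, with weights determined by the Gaussian parameters. A one-dimensional sketch:

```python
import numpy as np

def gauss(x, mu, var):
    """1-D Gaussian density."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def posterior(x, mu0, mu1, var):
    """Relative probability of x under two equal-variance, equal-prior
    Gaussians: P(class 1 | x)."""
    p0, p1 = gauss(x, mu0, var), gauss(x, mu1, var)
    return p1 / (p0 + p1)

def logistic_unit(x, mu0, mu1, var):
    """The same posterior expressed as a logistic unit sigmoid(w*x + b),
    with w = (mu1 - mu0)/var and b = (mu0^2 - mu1^2)/(2*var)."""
    w = (mu1 - mu0) / var
    b = (mu0 ** 2 - mu1 ** 2) / (2.0 * var)
    return 1.0 / (1.0 + np.exp(-(w * x + b)))
```

Substituting other density models for the two Gaussians changes what function of `x` appears inside the sigmoid, which is the generalization the abstract points toward.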
In this paper a novel approach to ECG signal classification is proposed. The approach is based on using hidden conditional random fields (HCRF) to model the ECG signal. Features used in training and testing the HCRF are based on time-frequency analysis of the ECG waveforms. Experimental results show that the HCRF model is promising and gives higher accuracy compared to maximum-likelihood (ML) t...