Adjacent node continuous-state HMM's
Abstract
This paper explores properties of a family of continuous-state hidden Markov models (CSHMM's) proposed for use in acoustic modeling. These models can be viewed as applying a smoothing to ordinary HMM's that makes the estimates of transition and observation probabilities more robust by sharing data between adjacent state nodes. They may be trained by EM so that all parameters properly reflect the applied smoothing. The amount of smoothing may itself be trained, and the model reverts to the ordinary HMM in the limit as the smoothing parameters approach zero, so the technique may be employed selectively, only in regions of the model where training data is sparse. The paper formulates EM training for one variant of these models and explores its performance when applied to constructing ergodic CSHMM models of speech and to a phoneme recognition task on the same data. The ergodic CSHMM did not improve performance over the HMM, but the phoneme CSHMM's model the data with higher likelihood than the equivalent HMM's and have superior recognition accuracy.
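As a rough illustration of the kind of adjacent-node smoothing the abstract describes, one way such sharing could be written is as a kernel-weighted pooling of the E-step counts; the kernel $w_\lambda$ and its parameterization below are illustrative assumptions for exposition, not the paper's actual re-estimation formulae:

% Hedged sketch: one plausible form of adjacent-node smoothing in the M-step,
% not necessarily the formulation derived in the paper.
\[
  \tilde{a}_{ij} =
    \frac{\sum_{k} w_\lambda(i,k)\, c(k \to j)}
         {\sum_{k} \sum_{j'} w_\lambda(i,k)\, c(k \to j')},
  \qquad
  \tilde{b}_i(o) =
    \frac{\sum_{k} w_\lambda(i,k)\, c(k, o)}
         {\sum_{k} \sum_{o'} w_\lambda(i,k)\, c(k, o')},
\]
where $c(\cdot)$ denotes expected counts from the E-step, the sums over $k$ run over state nodes adjacent to $i$ (including $i$ itself), and $w_\lambda$ is a smoothing kernel with $w_\lambda(i,i)=1$ and $w_\lambda(i,k)\to 0$ for $k \neq i$ as $\lambda \to 0$, so the ordinary HMM re-estimates are recovered in the zero-smoothing limit, consistent with the behavior described above.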
Similar resources
Wavelet-based non-parametric HMM's: theory and applications
In this paper, we propose a new algorithm for non-parametric estimation of hidden Markov models (HMM's). The algorithm is based on a "wavelet-shrinkage" density estimator for the state-conditional probability density functions of the HMM's. It operates in an iterative fashion, similar to the EM re-estimation formulae used for maximum likelihood estimation of parametric HMM's. We apply the resul...
Longest Path in Networks of Queues in the Steady-State
Due to the importance of longest path analysis in networks of queues, we develop an analytical method for computing the steady-state distribution function of longest path in acyclic networks of queues. We assume the network consists of a number of queuing systems and each one has either one or infinite servers. The distribution function of service time is assumed to be exponential or Erlang. Fu...
Modeling trajectories in the HMM framework
Most state-of-the-art statistical speech recognition systems use hidden Markov models (HMM) for modeling the speech signal. However, limited by the assumption of conditional independence of observations given the state sequence, current HMM's poorly model the trajectory constraints in speech. In [1], we introduced the parallel path HMM, where each phonetic unit is represented by a parallel coll...
On merging hidden Markov models with deformable templates
Hidden Markov modeling has proven extremely useful for statistical analysis of speech signals. There are, however, inherent problems in two-dimensional extensions to HMM's, one of which is the exponential complexity associated with fully 2-D HMM's. In this paper, we propose a new 2-D HMM-like structure obtained by embedding states within regions of a deformable template structure. With this sta...
Relative Density Nets: A New Way to Combine Backpropagation with HMM's
Geoffrey E. Hinton, Gatsby Unit, UCL, London, UK. Logistic units in the first hidden layer of a feedforward neural network compute the relative probability of a data point under two Gaussians. This leads us to consider substituting other density models. We present an architecture for performing discriminative learning of Hidden Markov Models using a network of many...