Search results for: cdhmms

Number of results: 34

1998
Brian Kan-Wing Mak, Enrico Bocchieri

Training of continuous density hidden Markov models (CDHMMs) is usually time-consuming and tedious due to the large number of model parameters involved. Recently we proposed a new derivative of the CDHMM, the subspace distribution clustering hidden Markov model (SDCHMM), which ties CDHMMs at the finer level of subspace distributions, resulting in many fewer model parameters. An SDCHMM training algorith...
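The tying scheme this abstract describes can be sketched in a few lines: split each diagonal-covariance Gaussian into per-stream subspace Gaussians, then cluster those subspace Gaussians across all models so that each original Gaussian only stores prototype indices. The Python sketch below is a minimal illustration under assumed sizes; plain k-means on (mean, log-variance) vectors stands in for the distance-based Gaussian clustering the authors use, and all names are hypothetical.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Illustrative sizes (assumptions, not the paper's setup):
M, D, K, P = 1000, 12, 4, 64   # Gaussians, feature dim, streams, prototypes/stream
rng = np.random.default_rng(0)
means = rng.normal(size=(M, D))                 # diagonal-covariance CDHMM Gaussians
variances = rng.uniform(0.5, 2.0, size=(M, D))

width = D // K                                  # each stream is a width-dim subspace
codebooks, labels = [], np.empty((M, K), dtype=int)
for s in range(K):
    sl = slice(s * width, (s + 1) * width)
    # Represent each subspace Gaussian by (mean, log-variance); Euclidean k-means
    # here is a stand-in for the Bhattacharyya-style clustering in the literature.
    feats = np.hstack([means[:, sl], np.log(variances[:, sl])])
    centroids, lab = kmeans2(feats, P, minit='++', seed=0)
    codebooks.append(centroids)
    labels[:, s] = lab                          # each Gaussian now keeps K indices

# Storage drops from M full-space Gaussians to K codebooks of P subspace prototypes.
print(labels.shape, [cb.shape for cb in codebooks])
```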

2007
Brian Mak, Enrico Bocchieri

Training of continuous density hidden Markov models (CDHMMs) is usually time-consuming and tedious due to the large number of model parameters involved. Recently we proposed a new derivative of the CDHMM, the subspace distribution clustering hidden Markov model (SDCHMM), which ties CDHMMs at the finer level of subspace distributions, resulting in many fewer model parameters. An SDCHMM training algori...

Journal: Computer Speech & Language, 1999
Stavros Tsakalidis, Vassilios Digalakis, Leonardo Neumeyer

This paper introduces a new form of observation distributions for hidden Markov models (HMMs), combining subvector quantization and mixtures of discrete distributions. We present efficient training and decoding algorithms for the discrete-mixture HMMs (DMHMMs). Our experimental results in the air-travel information domain show that the high level of recognition accuracy of continuous mixture-dens...
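As a rough sketch of the discrete-mixture observation model described above: each frame is split into subvectors, each subvector is quantized against its own codebook, and a state's likelihood is a mixture of products of discrete codeword probabilities. The sizes, layout, and names below are assumptions, not the paper's configuration.

```python
import numpy as np

# Hypothetical sizes: D-dim frames split into S subvectors, codebooks of size C,
# and a mixture of Q discrete distributions per HMM state.
D, S, C, Q = 12, 4, 256, 2
rng = np.random.default_rng(1)
codebooks = rng.normal(size=(S, C, D // S))     # one VQ codebook per subvector

def quantize(x):
    """Map a D-dim frame to S codeword indices (nearest codebook entry)."""
    parts = x.reshape(S, D // S)
    return np.array([np.argmin(((cb - p) ** 2).sum(axis=1))
                     for cb, p in zip(codebooks, parts)])

# One state's observation model: mixture weights and per-subvector discrete pmfs.
weights = np.full(Q, 1.0 / Q)
pmfs = rng.dirichlet(np.ones(C), size=(Q, S))   # pmfs[q, s] is a pmf over C codewords

def state_likelihood(x):
    idx = quantize(x)
    # b(x) = sum_q w_q * prod_s P_qs(codeword_s), subvectors treated as independent
    return sum(w * np.prod(pmfs[q, np.arange(S), idx])
               for q, w in enumerate(weights))

print(state_likelihood(rng.normal(size=D)))
```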

1995
Kin Yu, John S. D. Mason, John Oglesby

This paper evaluates continuous density hidden Markov models (CDHMMs), dynamic time warping (DTW) and distortion-based vector quantisation (VQ) for speaker recognition, across incremental amounts of training data. In comparing VQ and CDHMMs for text-independent (TI) speaker recognition, it is shown that VQ performs better than an equivalent CDHMM with one training version, but is outperformed...
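A minimal sketch of the distortion-based VQ recognizer being compared above, assuming one codebook per speaker and scoring a test utterance by its average quantization distortion (all data and sizes are synthetic placeholders):

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

# Toy text-independent setup: train a 32-entry codebook per speaker.
rng = np.random.default_rng(2)
train = {spk: rng.normal(loc=spk, size=(500, 12)) for spk in range(3)}
codebooks = {spk: kmeans2(frames, 32, minit='++', seed=0)[0]
             for spk, frames in train.items()}

def score(frames, codebook):
    """Mean squared distortion of the frames against one speaker's codebook."""
    _, dist = vq(frames, codebook)   # vq returns (codeword indices, distances)
    return float(np.mean(dist ** 2))

test = rng.normal(loc=1, size=(200, 12))   # an utterance from speaker 1
best = min(codebooks, key=lambda spk: score(test, codebooks[spk]))
print('identified speaker:', best)
```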

1995
Kin Yu, John Mason, John Oglesby

(List of figures) Illustration of the segmentation of the database, collected over a period of three months, into training and ...; %Error against total number of mixtures for TI ergodic CDHMMs (10-version training); %Error against the number of training versions for a TI 32-element VQ and a 32-mixture single-state CDHMM; %Error against the number of training versions for TD DTW, 8-element VQ and 1-mixture 8-st...

Journal: IEEE Trans. Speech and Audio Processing, 2001
Enrico Bocchieri, Brian Kan-Wing Mak

Most contemporary laboratory recognizers require too much memory to run and are too slow for mass applications. One major cause of the problem is the large parameter space of their acoustic models. In this paper, we propose a new acoustic modeling methodology, which we call subspace distribution clustering hidden Markov modeling (SDCHMM), with the aim of achieving much more compact acoustic mode...
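A back-of-envelope count shows where the compactness comes from: tying M full-space Gaussians to K streams with P prototypes each replaces per-Gaussian means and variances with small prototype indices. The sizes below are illustrative assumptions, not the paper's reported figures.

```python
# Illustrative parameter count for a diagonal-covariance system.
M, D, K, P = 10_000, 39, 13, 256   # Gaussians, feature dim, streams, prototypes/stream

cdhmm = 2 * M * D                  # one mean and one variance per dimension
sdchmm = 2 * K * P * (D // K)      # tied subspace prototypes...
indices = M * K                    # ...plus one prototype index per Gaussian per stream

print(cdhmm, sdchmm, indices)      # 780000 floats vs. 19968 floats + 130000 small ints
```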

Journal: IEEE Trans. Speech and Audio Processing, 2001
Brian Kan-Wing Mak, Enrico Bocchieri

It generally takes a long time and requires a large amount of speech data to train hidden Markov models for a speech recognition task with a reasonably large vocabulary. Recently, we proposed a compact acoustic model called the “subspace distribution clustering hidden Markov model” (SDCHMM) with the aim of saving some of the training effort. SDCHMMs are derived from tying continuous density hidden Marko...

1997
Brian Mak, Enrico Bocchieri, Etienne Barnard

In [1], our novel subspace distribution clustering hidden Markov model (SDCHMM) made its debut as an approximation to continuous density HMM (CDHMM). Deriving SDCHMMs from CDHMMs requires a definition of multiple streams and a Gaussian clustering scheme. Previously we have tried 4 and 13 streams, which are common but ad hoc choices. Here we present a simple and coherent definition for streams o...
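The abstract is cut off before the stream definition itself, but one simple, coherent grouping consistent with 13 streams over a standard 39-dimensional MFCC+Δ+ΔΔ vector is to put each static coefficient together with its two time derivatives. The sketch below illustrates that grouping as an assumption, not a confirmed account of the paper's scheme.

```python
import numpy as np

# Assume frames laid out as [13 statics | 13 deltas | 13 delta-deltas] (an assumption).
D, N_STATIC = 39, 13
streams = [np.array([c, c + N_STATIC, c + 2 * N_STATIC]) for c in range(N_STATIC)]

frame = np.arange(D, dtype=float)         # a dummy feature frame
subvectors = [frame[idx] for idx in streams]
print(streams[0], subvectors[0])          # dimensions (0, 13, 26) form stream 0
```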

2007
Yan Yin, Hui Jiang

In this paper, we present a new fast optimization method to solve large margin estimation (LME) of continuous density hidden Markov models (CDHMMs) for speech recognition based on second-order cone programming (SOCP). SOCP is a class of nonlinear convex optimization problems which can be solved quite efficiently. In this work, we have proposed a new convex relaxation condition under which LME of...
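To make the SOCP connection concrete, here is a toy second-order cone program in the spirit of large-margin estimation: maximize a common margin over linearized discriminant differences, subject to a second-order cone (trust-region) constraint on the parameters. The linearization and every name here are illustrative assumptions solved with cvxpy, not the authors' formulation.

```python
import cvxpy as cp
import numpy as np

# Linearized margins: a_i^T theta + c_i approximates the margin of training token i.
rng = np.random.default_rng(3)
n, p = 20, 5
A = rng.normal(size=(n, p))
c = rng.normal(size=n)
theta0 = np.zeros(p)                     # current model parameters

theta = cp.Variable(p)
rho = cp.Variable()
constraints = [A @ theta + c >= rho,                 # all margins at least rho
               cp.norm(theta - theta0, 2) <= 1.0]    # second-order cone trust region
prob = cp.Problem(cp.Maximize(rho), constraints)
prob.solve()
print(prob.status, float(rho.value))
```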

1998
Brian Kan-Wing Mak, Enrico Bocchieri

In [2] and [7], we presented our novel subspace distribution clustering hidden Markov models (SDCHMMs), which can be converted from continuous density hidden Markov models (CDHMMs) by clustering subspace Gaussians in each stream over all models. Though such model conversion is simple and runs fast, it has two drawbacks: (1) it does not take advantage of the fewer model parameters in SDCHMMs — theore...
