Search results for: hidden training
Number of results: 378,572
We have applied Bayesian regularisation methods to multi-layer perceptron (MLP) training in the context of a hybrid MLP–HMM (hidden Markov model) continuous speech recognition system. The Bayesian framework adopted here allows an objective setting of the regularisation parameters, according to the training data. Experiments were carried out on the ARPA Resource Management database.
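The abstract does not reproduce the Bayesian machinery itself, but the regulariser it controls is ordinary L2 weight decay. Below is a minimal NumPy sketch of one-hidden-layer MLP training with such a penalty; the coefficient `alpha`, fixed by hand here, is the quantity the Bayesian framework would set objectively from the training data. All data, sizes, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative, not from the paper)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(64, 1))

# One-hidden-layer MLP
H = 8
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

alpha = 1e-3   # L2 weight-decay strength; the Bayesian framework sets this from data
lr = 0.1

for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Gradients of mean squared error plus the penalty (alpha/2) * ||W||^2
    dW2 = h.T @ err / len(X) + alpha * W2
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
    dW1 = X.T @ dh / len(X) + alpha * W1
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

Larger `alpha` shrinks the weights toward zero and trades training fit for smoother, better-generalising functions; the cited work's contribution is choosing that trade-off from the data rather than by cross-validation.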
In this chapter we provide a supervised training paradigm for hidden Markov models (HMMs). Unlike popular ad-hoc approaches, our paradigm is completely general, need not make any simplifying assumptions about independence, and can take better advantage of the information contained in the training corpus.
In layered neural networks, the input space is reconstructed on the hidden layer through the connection weights from the input layer to the hidden layer and the output function of each hidden neuron. The connection weights are modified by learning and realize a transformation that emphasizes information necessary for computing the output and suppresses unnecessary information. In this paper, visual ...
Deep networks have achieved impressive results across a variety of important tasks. However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples. We propose Fortified Networks, a simple transformation of existing networks, which fortifies the hidden lay...
CONTEXT Religion and spirituality play an important role in physicians' medical practice, but little research has examined their influence within the socialization of medical trainees and the hidden curriculum. OBJECTIVES The objective is to explore the role of religion and spirituality as they intersect with aspects of medicine's hidden curriculum. METHODS Semiscripted, one-on-one intervie...
Discriminative training of hidden Markov models (HMMs) using segmental minimum classification error (MCE) training has been shown to work extremely well for certain speech recognition applications. It is, however, somewhat prone to overspecialization. This study investigates various techniques which improve performance and generalization of the MCE algorithm. Improvements of up to 7% in relative...
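The core of MCE training is a smoothed, differentiable stand-in for the 0/1 classification error: a misclassification measure compares the correct class's discriminant score with its best competitor, and a sigmoid squashes it into a loss between 0 and 1. The sketch below shows only that loss for a single example; the segmental HMM machinery the abstract refers to is omitted, and the function name, `gamma`, and scores are illustrative assumptions.

```python
import numpy as np

def mce_loss(scores, label, gamma=1.0):
    """Smoothed classification-error loss (MCE-style), a hedged sketch.

    scores: discriminant values g_j(x) for each class
    label:  index of the correct class
    gamma:  sigmoid slope; larger values approach the hard 0/1 error
    """
    g_correct = scores[label]
    competitors = np.delete(scores, label)
    d = competitors.max() - g_correct          # > 0 means misclassified
    return 1.0 / (1.0 + np.exp(-gamma * d))    # smooth 0..1 error count

scores = np.array([2.0, 0.5, -1.0])
print(mce_loss(scores, label=0))   # correct class wins by a margin: loss below 0.5
print(mce_loss(scores, label=1))   # correct class loses: loss above 0.5
```

Because the loss is differentiable in the scores, it can be pushed back through the model parameters by gradient descent, which is what makes the criterion usable for HMM re-estimation; the overspecialization the study targets arises because the sigmoid concentrates training effort on examples near the decision boundary.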
Discriminative training techniques for hidden Markov models were recently proposed and successfully applied for automatic speech recognition. In this paper a discussion of the Minimum Classification Error and the Maximum Mutual Information objective is presented. An extended re-estimation formula is used for the HMM parameter update for both objective functions. The discriminative training method...
Hidden semi-Markov models (HSMMs) are latent variable models which allow latent state persistence and can be viewed as a generalization of the popular hidden Markov models (HMMs). In this paper, we introduce a novel spectral algorithm to perform inference in HSMMs. Unlike expectation maximization (EM), our approach correctly estimates the probability of a given observation sequence based on a set...
We propose a two-stage training for the multilayer perceptron (MLP). The first stage is bottom-up, where we use a class separability measure to conduct hidden layer training and the least squared error criterion to train the output layer. The second stage is top-down, where we use a criterion derived from the classification error rate to further train the network weights. We demonstrate the effectiveness...
Inconsistency between training and testing criteria is a drawback of the hybrid artificial neural network and hidden Markov model (ANN/HMM) approach to speech recognition. This paper presents an effective method to address this problem by modifying the feedforward neural network training paradigm. Word errors are explicitly incorporated in the training procedure to achieve improved word recogniti...