Search results for: speech acoustics

Number of results: 125,685

1992
Makoto Hirayama, Eric Vatikiotis-Bateson, Kiyoshi Honda, Yasuharu Koike, Mitsuo Kawato

This study demonstrates a paradigm for modeling speech production based on neural networks. Using physiological data from speech utterances, a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior, allowing articulator trajectories to be generated from motor commands constrained by phoneme input strings and global performance parame...
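The forward-dynamics idea above (learn a mapping from motor commands to articulator behavior from data) can be illustrated with a toy stand-in. This is not the paper's network: it fits a single scalar linear map by stochastic gradient descent, and the function and variable names are hypothetical.

```python
def learn_forward_dynamics(commands, positions, lr=0.1, epochs=200):
    """Toy 'forward model': fit articulator position as a linear function
    of a scalar motor command by per-sample gradient descent.
    (Illustrative stand-in for a trained neural network, not the
    authors' model.)"""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for u, y in zip(commands, positions):
            err = (w * u + b) - y   # prediction error for this sample
            w -= lr * err * u       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return w, b
```

Once the forward map is fit, trajectories can be generated by feeding in a command sequence, which is the sense in which a learned forward model "generates articulator trajectories from motor commands."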

1998
David A. Nix, John E. Hogden

We describe Maximum-Likelihood Continuity Mapping (MALCOM), an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automaton architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits int...

Background & Objectives: Noise is one of the most detrimental factors in working environments and, alongside other physical stressors, has adverse effects on the mental health of employees. Open-plan offices such as banks are exposed to sources of noise pollution, which can negatively affect the health and comfort of employees. This study aimed to identify the sources of noise...

Journal: Journal of AI and Data Mining 2014
Ali Harimi, Ali Shahzadi, Alireza Ahmadyfard, Khashayar Yaghmaie

Speech emotion recognition (SER) is a new and challenging research area with a wide range of applications in man-machine interaction. The aim of an SER system is to recognize human emotion by analyzing the acoustics of speech sounds. In this study, we propose spectral pattern features (SPs) and harmonic energy features (HEs) for emotion recognition. These features extracted from the spectrogram ...
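The abstract's features are computed from a spectrogram. As a minimal sketch of that pipeline, the code below frames a signal, applies a Hann window, takes a naive DFT per frame, and collapses each spectrum into a few band energies. This is a generic illustration, not the paper's SPS/HES definitions; `band_energy_features` and its parameters are hypothetical.

```python
import math

def spectrogram(x, frame_len=64, hop=32):
    """Magnitude spectrogram via a naive DFT (O(N^2), fine for a sketch).
    Returns one list of |X[k]|, k = 0..frame_len//2, per frame."""
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        # Hann-windowed frame
        seg = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / frame_len))
               for n in range(frame_len)]
        mags = []
        for k in range(frame_len // 2 + 1):
            re = sum(seg[n] * math.cos(2 * math.pi * k * n / frame_len)
                     for n in range(frame_len))
            im = -sum(seg[n] * math.sin(2 * math.pi * k * n / frame_len)
                      for n in range(frame_len))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

def band_energy_features(frames, n_bands=4):
    """Collapse each frame's spectrum into n_bands mean energies --
    a crude stand-in for spectral-pattern features."""
    feats = []
    for mags in frames:
        size = len(mags) // n_bands
        feats.append([sum(m * m for m in mags[b * size:(b + 1) * size]) / size
                      for b in range(n_bands)])
    return feats
```

A real SER front end would use an FFT and perceptually motivated bands; the point here is only the spectrogram-to-feature-vector shape of the computation.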

2006
Xugang Lu, Masashi Unoki, Masato Akagi

This paper proposes a robust feature extraction method for automatic speech recognition (ASR) systems in reverberant environments. In this method, a sub-band power envelope inverse filtering algorithm based on the modulation transfer function (MTF), which we previously proposed, is incorporated as a front-end processor for ASR. The impulse response of the room acoustics is assumed to be expo...
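The exponential-impulse-response assumption is what makes power-envelope inverse filtering tractable: an exponential decay acts on the envelope like a one-pole IIR filter, whose exact inverse is a one-zero FIR filter. The sketch below demonstrates that relationship for a single sub-band in discrete time; it is a simplified illustration of the idea, not the authors' front end, and `tau` (decay time) is an assumed parameter.

```python
import math

def exp_reverb_envelope(env, tau, fs):
    """Smear a clean power envelope with an exponential room decay
    h[n] = exp(-n / (tau * fs)), realized as a one-pole IIR filter.
    (Simplified single-sub-band model, assumption of this sketch.)"""
    a = math.exp(-1.0 / (tau * fs))
    out, acc = [], 0.0
    for e in env:
        acc = e + a * acc        # recursive form of the exponential smearing
        out.append(acc)
    return out

def inverse_filter(env_rev, tau, fs):
    """Undo the smearing: the one-pole IIR has the exact FIR inverse
    x[n] = y[n] - a * y[n-1]."""
    a = math.exp(-1.0 / (tau * fs))
    out, prev = [], 0.0
    for y in env_rev:
        out.append(y - a * prev)
        prev = y
    return out
```

Under this model the clean envelope is recovered exactly; in practice the decay constant must be estimated from the reverberant signal, which is where the MTF enters.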

Journal: Archives of Acoustics 2023

Numerous studies have shown that teachers often speak louder in classrooms because of the acoustic properties of these spaces. To improve classroom acoustics, it is necessary to develop relevant criteria. Existing evaluation scales for room acoustic parameters have been developed on the basis of adults and a variety of languages (e.g. Dutch and English). One of the issues still not fully recognized is the effect of respondents' language and age on the resul...

2007

Fundamental speech research at the Department of Speech Communication & Music Acoustics, KTH, has led to a multi-lingual text-to-speech system and a speech recognition device. Both are presently put to use by the visually impaired. To date, over five hundred text-to-speech systems have been delivered, most of them in applications for the visually impaired. Some of these applications will be desc...

2012
Takayuki Arai, Kanae Amino, Mee Sonu, Keiichi Yasu, Takako Igeta, Kanako Tomaru, Marino Kasuya

In previous studies, we developed several physical models of the human vocal tract, reporting that they are intuitive and helpful for students studying acoustics and speech science. Furthermore, we designed a sliding vocal-tract handicraft model at a science workshop, enabling children to make their own vocal-tract model with a sound source. Additionally, at various science museums, we supervis...

2017
Ante Jukić

Blind multichannel speech dereverberation methods based on multichannel linear prediction (MCLP) estimate the dereverberated speech component without any knowledge of the room acoustics by estimating and subtracting the undesired reverberant component from the reference microphone signal. In this paper we present a general framework for incorporating sparsity in the time-frequency domain into M...
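The core MCLP operation described above (estimate the late reverberant component by linear prediction from delayed past samples, then subtract it) can be sketched in a stripped-down single-channel form. This is not the paper's sparsity-regularized multichannel framework: it uses one microphone and an order-1 prediction filter so the least-squares solution is closed-form, and the function name is hypothetical.

```python
def mclp_dereverb(y, delay):
    """Delayed linear prediction, the core of MCLP, reduced to one
    channel and filter order 1: predict y[n] from y[n - delay] with the
    least-squares coefficient g, then subtract the prediction.
    The delay protects early speech from being cancelled."""
    num = sum(y[n] * y[n - delay] for n in range(delay, len(y)))
    den = sum(y[n - delay] ** 2 for n in range(delay, len(y)))
    g = num / den if den else 0.0            # closed-form LS solution
    return [y[n] - (g * y[n - delay] if n >= delay else 0.0)
            for n in range(len(y))]
```

Practical systems solve this per frequency bin in the short-time Fourier domain, with higher filter orders and multiple channels; the sparsity priors in the abstract reweight exactly this least-squares step.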

2007
Melissa A. Redford

The development of clear speech was examined in a cross-sectional study of preschool children aged 3, 4, and 5 years old. Thirty children produced target monosyllabic words with monophthongal vowels in clear and casual speech conditions. Vowel acoustics were measured and adults were asked to provide clear speech ratings on either the vowel or the whole word. The results provided little evidence...
