Search results for: speech acoustics
Number of results: 125,685
Speech production is dependent on both auditory and somatosensory feedback. Although audition may appear to be the dominant sensory modality in speech production, somatosensory information plays a role that extends from brainstem responses to cortical control. Accordingly, the motor commands that underlie speech movements may have somatosensory as well as auditory goals. Here we provide evidenc...
We investigated how the properties of Infant Directed Speech (IDS) and Adult Directed Speech (ADS) differed in acoustics and in speech-related articulation. We examined both the degree to which auditory and motion properties changed as a function of speech style (IDS vs. ADS) and how the correlation between properties was affected by this change. The acoustic properties of 13 sentences...
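The abstract above reports correlations between acoustic and articulatory (motion) properties. As a minimal sketch of that kind of comparison, the Pearson correlation coefficient between two measurement series can be computed directly; the variable names below (e.g. `acoustic`, `motion`) are illustrative, not taken from the study:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den

# Hypothetical per-sentence measurements (one acoustic, one articulatory)
acoustic = [1.2, 1.8, 2.1, 2.9, 3.4]
motion = [0.9, 1.5, 1.9, 2.6, 3.1]
r = pearson(acoustic, motion)
```

A change in `r` between the IDS and ADS conditions would then indicate that the acoustic-articulatory coupling itself was affected by speech style, which is the kind of question the abstract describes.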
Speech production errors characteristic of dysarthria are chiefly responsible for the low accuracy of automatic speech recognition (ASR) for people diagnosed with the condition. The results of the small number of speech recognition studies, mostly conducted by assistive technology researchers, attest to this. In the engineering community, substantial research has been conducted...
Fifth International Workshop on Detection, Classification, Localization, and Density Estimation of Marine Mammals using Passive Acoustics, 21-25 August 2011, Timberline Lodge, Mount Hood, Oregon, USA. Entries are in the order of presentation; the alphabetical index of authors begins on page 86.
Speech can be represented as a constellation of constricting vocal tract actions called gestures, whose temporal patterning with respect to one another is expressed in a gestural score. Current speech datasets do not come with gestural annotation and no formal gestural annotation procedure exists at present. This paper describes an iterative analysis-by-synthesis landmark-based time-warping arc...
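The abstract above describes a landmark-based time-warping procedure for gestural annotation. The core operation behind any such alignment is dynamic time warping (DTW); the following is a minimal sketch of classic DTW between two 1-D sequences, not the authors' specific analysis-by-synthesis procedure:

```python
def dtw_cost(a, b):
    """Minimal dynamic-time-warping alignment cost between sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            D[i][j] = d + min(D[i - 1][j],         # insertion
                              D[i][j - 1],         # deletion
                              D[i - 1][j - 1])     # match
    return D[n][m]
```

Two sequences that differ only in timing (e.g. one repeated sample) align with zero cost, which is why time warping is suited to mapping landmarks between utterances spoken at different rates.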
Motivated by the speaker-specificity and stationarity of subglottal acoustics, this paper investigates the utility of subglottal cepstral coefficients (SGCCs) for speaker identification (SID) and verification (SV). SGCCs can be computed using accelerometer recordings of subglottal acoustics, but such an approach is infeasible in real-world scenarios. To estimate SGCCs from speech signals, we ad...
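Subglottal cepstral coefficients, like other cepstral features, derive from the cepstrum of a signal frame. As a generic illustration (not the paper's SGCC pipeline), the real cepstrum is the inverse DFT of the log magnitude spectrum; a naive O(N^2) DFT keeps the sketch dependency-free:

```python
import math

def dft_mag(x):
    """Magnitude spectrum of a real frame via a naive O(N^2) DFT."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(-2 * math.pi * k * n / N),
                                   math.sin(-2 * math.pi * k * n / N))
                    for n in range(N)))
            for k in range(N)]

def real_cepstrum(x):
    """c[q] = IDFT(log|DFT(x)|); cosine sum suffices since the
    log-magnitude spectrum of a real signal is even."""
    N = len(x)
    logmag = [math.log(m + 1e-12) for m in dft_mag(x)]  # guard against log(0)
    return [sum(logmag[k] * math.cos(2 * math.pi * k * q / N)
                for k in range(N)) / N
            for q in range(N)]

# Hypothetical voiced frame: fundamental plus one harmonic
frame = [math.sin(2 * math.pi * 3 * n / 32) +
         0.5 * math.sin(2 * math.pi * 6 * n / 32) for n in range(32)]
coeffs = real_cepstrum(frame)
```

The low-order coefficients summarize the spectral envelope, which is the property that makes cepstral features useful for speaker identification; production systems would use mel filterbanks and an FFT rather than this direct form.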
An automatic speech recognition approach is presented which uses articulatory features estimated by a subject-independent acoustic-to-articulatory inversion. The inversion allows estimation of articulatory features from any talker's speech acoustics using only an exemplary subject's articulatory-to-acoustic map. Results are reported on a broad-class phonetic classification experiment on speech ...