Search results for: speech acoustics

Number of results: 125,685

Journal: The Journal of the Acoustical Society of America, 1971

Journal: Current Biology, 2006
Sazzad M. Nasir, David J. Ostry

Speech production is dependent on both auditory and somatosensory feedback. Although audition may appear to be the dominant sensory modality in speech production, somatosensory information plays a role that extends from brainstem responses to cortical control. Accordingly, the motor commands that underlie speech movements may have somatosensory as well as auditory goals. Here we provide evidenc...

Journal: Journal of Speech, Language, and Hearing Research, 2012

Journal: The Journal of the Acoustical Society of America, 1990

2012
Jeesun Kim, Chris Davis, Christine Kitamura

We investigated how the properties of Infant Directed Speech (IDS) and Adult Directed Speech (ADS) differed in acoustics and in speech-related articulation. We examined both the degree to which auditory and motion properties changed as a function of speech style (IDS vs. ADS) and how the correlation between those properties was affected by this change. The acoustic properties of 13 sentences...
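
As a rough illustration of the kind of acoustic comparison described above (not the authors' actual pipeline), one could contrast mean fundamental frequency and F0 range between an IDS and an ADS recording; the file names below are hypothetical placeholders.

```python
# A minimal sketch (not the authors' pipeline) of one acoustic comparison the
# abstract implies: mean F0 and F0 range per recording, contrasting IDS and ADS.
# The file paths "ids.wav" and "ads.wav" are hypothetical placeholders.
import numpy as np
import librosa

def f0_stats(path):
    """Return mean F0 (Hz) and F0 range (Hz) over the voiced frames of a recording."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag]
    return float(np.mean(voiced)), float(np.max(voiced) - np.min(voiced))

for style, path in [("IDS", "ids.wav"), ("ADS", "ads.wav")]:
    mean_f0, f0_range = f0_stats(path)
    print(f"{style}: mean F0 = {mean_f0:.1f} Hz, F0 range = {f0_range:.1f} Hz")
```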

2012
Harsh Vardhan Sharma, Kumkum Sharma, Krishna Kant Sharma

Speech production errors characteristic of dysarthria are chiefly responsible for the low accuracy of automatic speech recognition (ASR) when it is used by people diagnosed with the disorder. The results of the small number of speech recognition studies, mostly conducted by assistive technology researchers, attest to this. In the engineering community, substantial research has been conducted...
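
For context on how such ASR studies are typically scored (this is not the study's own setup), word error rate can be computed between reference transcripts and recognizer output; the transcript strings in the sketch below are invented.

```python
# A minimal sketch of computing word error rate (WER), the usual accuracy metric
# in the ASR studies the abstract refers to. The transcripts below are made up.
import jiwer

references = [
    "please turn the lamp on",
    "open the front door",
]
hypotheses = [  # hypothetical recognizer output for a dysarthric speaker
    "please turn lamp on",
    "open a front door",
]

# jiwer.wer computes (substitutions + deletions + insertions) / reference words
error_rate = jiwer.wer(references, hypotheses)
print(f"WER: {error_rate:.2%}")
```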

2011
Yvan Simard, Cédric Gervaise, Nathalie Roy

Fifth International Workshop on Detection, Classification, Localization, and Density Estimation of Marine Mammals using Passive Acoustics, 21-25 August 2011, Timberline Lodge, Mount Hood, Oregon, USA. Abstracts are in the order of presentation; the alphabetical index of authors begins on page 86.

Journal: The Journal of the Acoustical Society of America, 2012
Hosung Nam, Vikramjit Mitra, Mark Tiede, Mark Hasegawa-Johnson, Carol Espy-Wilson, Elliot Saltzman, Louis Goldstein

Speech can be represented as a constellation of constricting vocal tract actions called gestures, whose temporal patterning with respect to one another is expressed in a gestural score. Current speech datasets do not come with gestural annotation, and no formal gestural annotation procedure exists at present. This paper describes an iterative analysis-by-synthesis landmark-based time-warping arc...
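
For intuition about the time-warping step mentioned above (a generic sketch, not the paper's procedure), dynamic time warping can align the cepstral features of two renditions of an utterance; the file names are hypothetical.

```python
# A minimal sketch of aligning two renditions of an utterance with dynamic time
# warping over MFCCs, the generic building block behind time-warping approaches
# like the one the abstract mentions. The file names are hypothetical placeholders.
import librosa

def mfcc_features(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

natural = mfcc_features("natural_utterance.wav")
synthetic = mfcc_features("synthesized_utterance.wav")

# D is the accumulated cost matrix; wp is the optimal warping path as
# (natural_frame, synthetic_frame) index pairs, ordered from end to start.
D, wp = librosa.sequence.dtw(X=natural, Y=synthetic, metric="euclidean")
print(f"Alignment cost: {D[-1, -1]:.2f}, path length: {len(wp)} frame pairs")
```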

2014
Harish Arsikere, Hitesh Anand Gupta, Abeer Alwan

Motivated by the speaker-specificity and stationarity of subglottal acoustics, this paper investigates the utility of subglottal cepstral coefficients (SGCCs) for speaker identification (SID) and verification (SV). SGCCs can be computed using accelerometer recordings of subglottal acoustics, but such an approach is infeasible in real-world scenarios. To estimate SGCCs from speech signals, we ad...
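
As a generic sketch of cepstral-coefficient speaker identification (using standard MFCCs rather than the paper's SGCCs), one could fit a Gaussian mixture model per speaker and score test utterances against each model; the file lists below are hypothetical.

```python
# A minimal sketch of cepstral-coefficient speaker identification with one
# Gaussian mixture model (GMM) per speaker. It uses standard MFCCs rather than
# the paper's SGCCs, and the file lists are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def cepstral_frames(path):
    """Frame-level cepstral coefficients, shape (n_frames, n_coeffs)."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# Enrollment: fit one GMM on each speaker's training recordings.
train_files = {"speaker_a": ["a1.wav", "a2.wav"], "speaker_b": ["b1.wav", "b2.wav"]}
models = {}
for speaker, files in train_files.items():
    frames = np.vstack([cepstral_frames(f) for f in files])
    models[speaker] = GaussianMixture(n_components=8, covariance_type="diag").fit(frames)

# Identification: pick the speaker whose model assigns the test utterance
# the highest average log-likelihood.
test = cepstral_frames("unknown.wav")
scores = {spk: gmm.score(test) for spk, gmm in models.items()}
print("Identified speaker:", max(scores, key=scores.get))
```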

Journal: The Journal of the Acoustical Society of America, 2011
Prasanta Kumar Ghosh, Shrikanth Narayanan

An automatic speech recognition approach is presented which uses articulatory features estimated by a subject-independent acoustic-to-articulatory inversion. The inversion allows estimation of articulatory features from any talker's speech acoustics using only an exemplary subject's articulatory-to-acoustic map. Results are reported on a broad class phonetic classification experiment on speech ...
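
As a toy sketch of the classification setup implied above (not the paper's system), frame-level acoustic and estimated articulatory features can be concatenated and fed to a standard classifier; the arrays below are random stand-ins rather than real inversion output.

```python
# A minimal sketch of broad-class phonetic classification from frame vectors that
# combine acoustic features with estimated articulatory features. The arrays here
# are random stand-ins, not real acoustic-to-articulatory inversion output.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames = 2000
acoustic = rng.normal(size=(n_frames, 13))     # e.g. MFCCs per frame
articulatory = rng.normal(size=(n_frames, 6))  # e.g. estimated tract variables
features = np.hstack([acoustic, articulatory])

# Broad phonetic classes: 0=vowel, 1=stop, 2=fricative, 3=nasal, 4=silence
labels = rng.integers(0, 5, size=n_frames)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Frame-level broad-class accuracy: {clf.score(X_test, y_test):.2%}")
```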

Chart of the number of search results per year (click the chart to filter results by publication year)