Search results for: lip reading

Number of results: 130722

Journal: Proceedings of the AAAI Conference on Artificial Intelligence, 2020

Journal: The Journal of the Acoustical Society of America, 2016
Jacques A. Grange, John F. Culling

Cochlear implant (CI) users suffer from elevated speech-reception thresholds and may rely on lip reading. Traditional measures of spatial release from masking quantify speech-reception-threshold improvement with azimuthal separation of target speaker and interferers and with the listener facing the target speaker. Substantial benefits of orienting the head away from the target speaker were pred...
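To make the quantity concrete: spatial release from masking (SRM) is conventionally reported as the drop in speech-reception threshold (SRT) when target and masker are spatially separated. A minimal sketch follows; the threshold values are made-up placeholders, not data from the paper.

```python
# Sketch: SRM as the improvement (drop) in speech-reception threshold when
# the masker is moved away from the target. Values are illustrative only.

srt_colocated_db = -2.0   # hypothetical SRT, target and masker co-located (dB SNR)
srt_separated_db = -8.5   # hypothetical SRT, masker separated in azimuth (dB SNR)

srm_db = srt_colocated_db - srt_separated_db  # positive value = benefit of separation
print(f"Spatial release from masking: {srm_db:.1f} dB")
```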

2010
Jacob L. Newman, Barry-John Theobald, Stephen J. Cox

In this paper we investigate the limits of automated lip-reading systems and we consider the improvement that could be gained were additional information from other (non-visible) speech articulators available to the recogniser. Hidden Markov model (HMM) speech recognisers are trained using electromagnetic articulography (EMA) data drawn from the MOCHA-TIMIT data set. Articulatory information is...
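As a rough illustration of the general approach (one Gaussian HMM per class, fitted to feature frames, with recognition by maximum log-likelihood), here is a hedged sketch using the hmmlearn library. The feature dimensions, class setup, and synthetic data are assumptions for illustration, not the paper's configuration or toolkit.

```python
# Sketch of an HMM-based recogniser over articulatory feature frames.
# Uses hmmlearn; data shapes and hyperparameters are illustrative only.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_class_hmm(sequences, n_states=3):
    """Fit one Gaussian HMM per class from a list of (frames x features) arrays."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=20, random_state=0)
    model.fit(X, lengths)
    return model

# Synthetic stand-ins for per-class EMA feature sequences (e.g. 12-dim
# articulator coordinates); a real system would read MOCHA-TIMIT features.
classes = {
    "word_a": [rng.normal(0.0, 1.0, size=(40, 12)) for _ in range(5)],
    "word_b": [rng.normal(2.0, 1.0, size=(40, 12)) for _ in range(5)],
}
models = {label: train_class_hmm(seqs) for label, seqs in classes.items()}

# Recognition: pick the class whose HMM assigns the test sequence the
# highest log-likelihood.
test = rng.normal(2.0, 1.0, size=(40, 12))
best = max(models, key=lambda label: models[label].score(test))
print("recognised as:", best)
```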

2017
Ahsan Adeel, Mandar Gogate, Amir Hussain

Speech enhancement aims to improve perceived speech quality and intelligibility in the presence of noise. Classical speech enhancement methods are based mainly on audio-only processing, which often performs poorly in adverse conditions where overwhelming noise is present. This paper presents an interactive prototype demo, as part of a disruptive cognitively-inspired multimodal hearing-aid bei...
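For context on the classical audio-only baseline the abstract contrasts against, a hedged sketch of simple spectral subtraction follows. The parameters and the noise-estimation heuristic (an assumed speech-free lead-in) are illustrative assumptions, not the paper's system.

```python
# Sketch of classical audio-only spectral subtraction: estimate the noise
# magnitude spectrum from an assumed speech-free lead-in, subtract it from
# each frame, and resynthesise. Purely illustrative.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, fs, noise_seconds=0.25, floor=0.05):
    f, t, Z = stft(noisy, fs=fs, nperseg=512)  # default hop = 256 samples
    mag, phase = np.abs(Z), np.angle(Z)
    # Assume the first `noise_seconds` contain noise only (a common heuristic).
    n_frames = max(1, int(noise_seconds * fs / 256))
    noise_mag = mag[:, :n_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)  # spectral floor
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced

# Toy usage: a sine "speech" in white noise, preceded by a noise-only lead-in.
fs = 16000
rng = np.random.default_rng(0)
lead = 0.3 * rng.normal(size=fs // 4)                  # 0.25 s of noise only
tt = np.arange(fs) / fs
noisy = np.concatenate([lead, np.sin(2 * np.pi * 300 * tt) + 0.3 * rng.normal(size=fs)])
out = spectral_subtract(noisy, fs)
```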

2013
Dominic Howell, Barry-John Theobald, Stephen J. Cox

Automated lip-reading involves recognising speech from only the visual signal. The accuracy of current state-of-the-art lip-reading systems is significantly lower than that obtained by acoustic speech recognisers. These poor results are most likely due to the lack of information about speech production available in the visual signal: for example, it is impossible to discriminate voiced a...
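To make the ambiguity concrete, here is a small sketch of a many-to-one phoneme-to-viseme mapping. The grouping below is one common simplified scheme used for illustration, not the paper's exact mapping.

```python
# Illustrative many-to-one phoneme-to-viseme mapping: phonemes that differ
# only in voicing or nasality produce the same visible mouth shape, which is
# why voiced/voiceless pairs cannot be discriminated from video alone.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",   # /p b m/ look alike
    "f": "labiodental", "v": "labiodental",              # /f v/ look alike
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
    "k": "velar", "g": "velar",
}

def visually_confusable(ph1, ph2):
    """Two phonemes are indistinguishable to a lip-reader if they share a viseme."""
    return PHONEME_TO_VISEME.get(ph1) == PHONEME_TO_VISEME.get(ph2)

print(visually_confusable("p", "b"))  # True: voiced/voiceless bilabial pair
print(visually_confusable("p", "f"))  # False: different mouth shapes
```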

Journal: Cerebral Cortex, 2005
Norihiro Sadato, Tomohisa Okada, Manabu Honda, Ken-Ichi Matsuki, Masaki Yoshida, Ken-Ichi Kashikura, Wataru Takei, Tetsuhiro Sato, Takanori Kochiyama, Yoshiharu Yonekura

Sign language activates the auditory cortex of deaf subjects, which is evidence of cross-modal plasticity. Lip-reading (visual phonetics), which involves audio-visual integration, activates the auditory cortex of hearing subjects. To test whether audio-visual cross-modal plasticity occurs within areas involved in cross-modal integration, we used functional MRI to study seven prelingual deaf sig...
