Search results for: speech sign

Number of results: 171515

Thesis: Ministry of Science, Research and Technology - University of Tabriz, 1381

The hypothesis is that recent and frequent exposure to lexical items leads to more fluent speech production in terms of rate of speech. To test the hypothesis, a one-way ANOVA experimental design was carried out. Twenty-four senior EFL students participated in a one-way interview test. Data analyses revealed that those who were frequently exposed to the lexical items over a week prior to inte...
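The one-way ANOVA design described above can be sketched in a few lines of Python. This is a minimal illustration assuming, hypothetically, three exposure conditions of eight students each, with speech rate measured in words per minute; the group names and figures are invented for the example and are not the study's data.

# Minimal one-way ANOVA sketch (hypothetical data, not the study's)
from scipy.stats import f_oneway

# Speech rate (words per minute) under each assumed exposure condition
recent_and_frequent = [112, 118, 121, 109, 115, 120, 117, 114]
recent_only         = [104, 108, 101, 110, 106, 103, 107, 105]
no_exposure         = [95, 101, 98, 94, 100, 97, 96, 99]

f_stat, p_value = f_oneway(recent_and_frequent, recent_only, no_exposure)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below .05 would indicate that mean speech rate differs across
# conditions, consistent with the stated hypothesis.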

2017
Sivalogeswaran Ratnasingam T. M. McGinnity Shuying Jiang Yonas Fantahun Admasu Kumudha Raimond Weihua Sheng Anjali Kalra Sarbjeet Singh Kostas Karpouzis Athanasios Drosopoulos Stefanos Kollias Jagdish Lal Raheja Radhey Shyam

Gesture- and speech-based human-computer interaction is attracting attention across various areas such as pattern recognition and computer vision. This kind of research finds many applications in multimodal HCI, robotics control, and sign language recognition. This paper presents a head and hand gesture as well as speech recognition system for human-computer interaction (HCI). This kind of vis...

Journal: :Chest 1987
P K Monoson A W Fox

Clinical observation suggested that speech disorder seemed to be associated with sleep apnea. We recorded a standard speech sample from 39 matched subjects in three groups: 13 individuals with sleep apnea, 13 subjects with COPD, and 13 subjects without sleep apnea or COPD. Three speech pathologists, in a single-blind listening task on the recorded samples, judged whether or not speech disorder was pre...

2017
Krzysztof Wolk Agnieszka Wolk Wojciech Glinkowski

People with speech, hearing, or mental impairment require special communication assistance, especially for medical purposes. Automatic solutions for speech recognition and voice synthesis from text are poor fits for communication in the medical domain because they are dependent on error-prone statistical models. Systems dependent on manual text input are insufficient. Recently introduced system...

2007
Hartmut Traunmüller

According to the Motor Theory of Speech Perception (MTSP), listeners perceive speech by way of the articulatory gestures they would perform themselves in producing a similar signal. The theory postulates a module that allows extracting gestural information from the signal. The gestures constitute the event perceived. According to the Modulation Theory (MDT), speech is modulated voice. Listeners...

Journal: :Brain and language 2007
Roel M Willems Peter Hagoort

Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neurosci...

2009
Robert Morris Ralph Johnson Vladimir Goncharoff Joseph DiVita

This paper presents an improved method for asynchronous embedding and recovery of sub-audible watermarks in speech signals. The watermark, a sequence of DTMF tones, was added to speech without knowledge of its time-varying characteristics. Watermark recovery began by implementing a synchronized zero-phase inverse filtering operation to decorrelate the speech during its voiced segments. The fina...
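The asynchronous embedding step described above (adding a DTMF tone sequence to speech at a sub-audible level, without using the signal's time-varying characteristics) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the sampling rate, tone duration, digit set, and embedding level are assumptions.

import numpy as np

FS = 8000  # assumed sampling rate in Hz

# Standard DTMF frequency pairs (low group, high group) for a few digits
DTMF = {"1": (697, 1209), "2": (697, 1336), "3": (697, 1477)}

def dtmf_tone(digit, duration, fs=FS):
    # One DTMF tone: the sum of its two sinusoids
    t = np.arange(int(duration * fs)) / fs
    f_low, f_high = DTMF[digit]
    return np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)

def embed_watermark(speech, digits, tone_dur=0.1, level_db=-30.0):
    # Add a DTMF digit sequence to the speech at a fixed level below the
    # overall speech power, without tracking its time-varying statistics.
    tones = np.concatenate([dtmf_tone(d, tone_dur) for d in digits])
    tones = np.resize(tones, len(speech))  # repeat or trim to match length
    gain = np.sqrt(np.mean(speech ** 2) / np.mean(tones ** 2)) * 10 ** (level_db / 20)
    return speech + gain * tones

# Example with placeholder "speech" (white-noise stand-in)
rng = np.random.default_rng(0)
speech = 0.1 * rng.standard_normal(2 * FS)
marked = embed_watermark(speech, "123")
print("watermark level relative to speech:",
      round(10 * np.log10(np.mean((marked - speech) ** 2) / np.mean(speech ** 2)), 1), "dB")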

Journal: :Journal of deaf studies and deaf education 2008
Ruth Campbell Mairéad MacSweeney Dafydd Waters

How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing especially as compared to speech processing. We focu...

Journal: :Developmental science 2010
Seyda Ozçalişkan Susan Goldin-Meadow

Children differ in how quickly they reach linguistic milestones. Boys typically produce their first multi-word sentences later than girls do. We ask here whether there are sex differences in children's gestures that precede, and presage, these sex differences in speech. To explore this question, we observed 22 girls and 18 boys every 4 months as they progressed from one-word speech to multi-wor...

2011
Fernando J. López-Colino Javier Tejedor Jordi Porta José Colás Pasamontes

This paper presents the first results of the integration of a Spanish-to-LSE Machine Translation (MT) system into an e-learning platform. Most e-learning platforms provide speech-based contents, which makes them inaccessible to the Deaf. To solve this issue, we have developed an MT system that translates Spanish speech-based contents into LSE. To test our MT system, we have integrated it into an e...

Chart: number of search results per year
