Search results for: speech sign

Number of results: 171,515

2002
Stephen Cox

A system that provides translation from speech to sign language (TESSA) is described. TESSA has been developed to assist a Post Office clerk (who has no knowledge of sign language) in making a transaction with a customer who uses sign language. The system uses a set of about 370 pre-stored phrases that can be signed by a specially developed avatar. The clerk is unable to ...
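As a rough illustration of the phrase-lookup approach this abstract describes, the sketch below matches recognized speech against a small set of pre-stored phrases and hands the best match to the signing avatar. The phrase set, clip identifiers, and threshold are hypothetical stand-ins, not TESSA's actual data or matching method.

```python
# Minimal sketch of phrase-lookup translation, assuming a fixed phrase store.
from difflib import SequenceMatcher

# Tiny stand-in for the ~370 pre-stored transaction phrases (hypothetical).
PHRASES = {
    "how much is a first class stamp": "SIGN_CLIP_001",
    "please sign here": "SIGN_CLIP_002",
    "that will be two pounds fifty": "SIGN_CLIP_003",
}

def best_matching_phrase(recognized_text: str) -> tuple[str, str, float]:
    """Return (phrase, sign clip id, similarity) for the closest pre-stored phrase."""
    scored = [
        (phrase, clip, SequenceMatcher(None, recognized_text.lower(), phrase).ratio())
        for phrase, clip in PHRASES.items()
    ]
    return max(scored, key=lambda item: item[2])

if __name__ == "__main__":
    phrase, clip, score = best_matching_phrase("How much is a first-class stamp?")
    if score > 0.7:  # only sign when the match is reasonably confident
        print(f"Avatar plays {clip} for: '{phrase}' (similarity {score:.2f})")
    else:
        print("No suitable pre-stored phrase; ask the clerk to rephrase.")
```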

2010
Jan Trmal Marek Hrúz

In the paper we give a brief introduction to sign language recognition and present a particular research task where access to MetaCentrum computing facilities was highly beneficial. Although the problem of signed speech recognition is currently being researched by many institutions around the world, it lacks a generally accepted baseline parametrization method. Our t...

2012
Hugh Rabagliati Ann Senghas Scott Johnson Gary F. Marcus

Infants appear to learn abstract rule-like regularities (e.g., la la da follows an AAB pattern) more easily from speech than from a variety of other auditory and visual stimuli (Marcus et al., 2007). We test if that facilitation reflects a specialization to learn from speech alone, or from modality-independent communicative stimuli more generally, by measuring 7.5-month-old infants' ability to ...

Journal: Expert Syst. Appl., 2013
Verónica López-Ludeña Rubén San-Segundo-Hernández Carlos González-Morcillo Juan Carlos López José Manuel Pardo

This paper describes a new version of a speech-to-sign-language translation system with new tools and characteristics for increasing its adaptability to a new task or a new semantic domain. This system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to t...
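A minimal sketch of the pipeline named in this abstract (speech recognizer, then natural language translator producing a sign sequence) is given below. The stub functions and the word-to-sign lexicon are illustrative assumptions only, not the authors' implementation.

```python
# Sketch of a speech -> words -> sign-gloss pipeline, assuming stub components.
from typing import List

# Toy word-to-sign lexicon (hypothetical); a real system uses a trained translator.
WORD_TO_SIGN = {"hello": "HELLO", "where": "WHERE", "is": None, "the": None, "exit": "EXIT"}

def recognize_speech(audio: bytes) -> List[str]:
    """Stand-in for the speech recognizer: decode audio into a word sequence."""
    # For the sketch we simply pretend the recognizer produced this utterance.
    return ["where", "is", "the", "exit"]

def translate_to_signs(words: List[str]) -> List[str]:
    """Stand-in for the language translator: map words to a sequence of sign glosses."""
    signs = [WORD_TO_SIGN.get(w) for w in words]
    return [s for s in signs if s is not None]  # drop words with no sign equivalent

if __name__ == "__main__":
    words = recognize_speech(b"")  # empty audio, since this is only a sketch
    signs = translate_to_signs(words)
    print("Recognized words:", words)
    print("Sign sequence for the avatar:", signs)  # e.g. ['WHERE', 'EXIT']
```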

2008
Philippe Dreuw Daniel Stein Thomas Deselaers David Rybach Morteza Zahedi Jan Bungeroth Hermann Ney

We present an approach to automatically recognize sign language and translate it into a spoken language. A system to address these tasks is created based on state-of-the-art techniques from statistical machine translation, speech recognition, and image processing research. Such a system is necessary for communication between deaf and hearing people. The communication is otherwise nearly impossib...
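To make the two-stage idea in this abstract concrete (recognize sign glosses from visual features, then translate the gloss sequence into spoken words), here is a toy sketch. The prototype features, glosses, and gloss-to-word mapping are made up for illustration; the paper's system relies on trained statistical models rather than this lookup.

```python
# Toy sketch: nearest-prototype gloss recognition followed by gloss-to-word mapping.
import math
from typing import List, Tuple

# Pretend each sign gloss has a prototype feature vector extracted from video (hypothetical).
GLOSS_PROTOTYPES = {
    "THANK-YOU": (0.9, 0.1),
    "HELP": (0.2, 0.8),
    "WHERE": (0.5, 0.5),
}
GLOSS_TO_SPOKEN = {"THANK-YOU": "thank you", "HELP": "help", "WHERE": "where"}

def nearest_gloss(feature: Tuple[float, float]) -> str:
    """Classify one video segment by its nearest gloss prototype (toy recognizer)."""
    return min(GLOSS_PROTOTYPES, key=lambda g: math.dist(feature, GLOSS_PROTOTYPES[g]))

def glosses_to_speech(glosses: List[str]) -> str:
    """Toy 'translation' step: map each gloss to a spoken-language word."""
    return " ".join(GLOSS_TO_SPOKEN[g] for g in glosses)

if __name__ == "__main__":
    segment_features = [(0.55, 0.45), (0.15, 0.85)]      # two signed segments
    glosses = [nearest_gloss(f) for f in segment_features]
    print("Recognized glosses:", glosses)                # ['WHERE', 'HELP']
    print("Spoken output:", glosses_to_speech(glosses))  # 'where help'
```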

2011
Jean F. Andrews Vickie Dionne

Alice, a deaf girl who was implanted after the age of three, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use her sign bimodal/bilingualism, code-switching, and code-mixing during reading activities, and (2) what sign bilingual code-switching and code-mixing strategies did ...

2012
Philippe Dreuw

This PhD thesis investigates the image sequence labeling problems of optical character recognition (OCR), object tracking, and automatic sign language recognition (ASLR). To address these problems, we investigate which concepts and ideas from speech recognition can be adopted for them. For each of these tasks we propose an approach that is centered around the approaches known from speech r...

Journal: Journal of Speech, Language, and Hearing Research (JSLHR), 2017
Lawrence D Shriberg Edythe A Strand Marios Fourakis Kathy J Jakielski Sheryl D Hall Heather B Karlsson Heather L Mabie Jane L McSweeny Christie M Tilkens David L Wilson

Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

Journal: Journal of Deaf Studies and Deaf Education, 2014
Marcel R Giezen Anne E Baker Paola Escudero

The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found t...

Journal: Journal of Memory and Language, 2009
Karen Emmorey Rain Bosworth Tanya Kraljic

The perceptual loop theory of self-monitoring posits that auditory speech output is parsed by the comprehension system. For sign language, however, visual input from one's own signing is distinct from visual input received from another's signing. Two experiments investigated the role of visual feedback in the production of American Sign Language (ASL). Experiment 1 revealed that signers were po...

[Chart: number of search results per year]