Search results for: verbal sound recognition

Number of results: 385073

2016
Yong MA

Constructing a targeted wavelet neural network is an effective way to enhance network performance and recognition accuracy. By introducing a heart-sound wavelet as the activation function of the neural network's hidden layer, targeted learning and recognition of heart sounds are deeply integrated, yielding a new heart-sound wavelet neural network. Selecting normal heart sounds and premature...
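The snippet does not specify the heart-sound wavelet itself, so as a minimal sketch of the idea, a hidden layer whose units apply a translated and dilated wavelet to their weighted input, here using a Morlet-style wavelet purely as a stand-in activation:

```python
import numpy as np

def morlet(t):
    # Morlet-style mother wavelet, used here as a placeholder activation;
    # the paper's actual heart-sound wavelet is not given in the snippet.
    return np.cos(1.75 * t) * np.exp(-t**2 / 2.0)

def wavelet_layer(x, weights, shifts, scales):
    # Hidden unit j computes psi((w_j . x - b_j) / a_j): the wavelet,
    # shifted by b_j and dilated by a_j, applied to the weighted input.
    z = weights @ x  # one weighted sum per hidden unit
    return morlet((z - shifts) / scales)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)        # e.g. one frame of heart-sound features
W = rng.standard_normal((8, 16))   # 8 hidden units
b = np.zeros(8)                    # shifts (translations)
a = np.ones(8)                     # scales (dilations)
h = wavelet_layer(x, W, b, a)
print(h.shape)  # prints (8,)
```

Training such a network would adjust the weights, shifts, and scales jointly; the sketch only shows the forward pass of one hidden layer.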

Journal: :Human psychopharmacology 2010
Inge Zeeuws Natacha Deroost Eric Soetens

OBJECTIVE The improvement of long-term retention of verbal memory after an acute administration of D-amphetamine in recall and recognition tasks has been ascribed to an influence of the drug on memory consolidation. Because recent research has demonstrated that intermediate testing is of overriding importance for retention, we investigated whether D-amphetamine modulates the repeated testing ef...

2010
Shota Yamamoto Yasunari Yoshitomi Masayoshi Tabuse Kou Kushida Taro Asada

We propose a method for detecting a baby voice using a speech recognition system and fundamental frequency analysis. We propose the following two conditions for recognizing a sound-form segment as a baby voice. Condition 1: the word reliability for a sound-form segment obtained using Julius is under a threshold. Condition 2: for a certain time period, the fundamental frequency of the sound f...
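The two conditions above can be sketched as a single predicate. All threshold values below are illustrative assumptions; the paper's actual thresholds, and how Julius scores word reliability, are not given in the snippet:

```python
def is_baby_voice(word_reliability, f0_track_hz,
                  reliability_threshold=0.5,
                  f0_range_hz=(250.0, 600.0),
                  min_fraction_in_range=0.8):
    # Condition 1: the recognizer's word reliability for the segment is
    # below a threshold (an adult-speech model fits the segment poorly).
    cond1 = word_reliability < reliability_threshold
    # Condition 2: over the segment, the fundamental frequency stays
    # mostly inside a baby-typical high-pitch range.
    lo, hi = f0_range_hz
    in_range = sum(lo <= f <= hi for f in f0_track_hz)
    cond2 = in_range / len(f0_track_hz) >= min_fraction_in_range
    return cond1 and cond2

# A low-reliability, high-pitched segment passes both conditions:
print(is_baby_voice(0.2, [300.0] * 10))  # prints True
```

Combining a recognizer-confidence cue with a pitch-range cue is what makes the method robust: either condition alone also fires on noise or on high-pitched adult speech.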

Journal: :Journal of the American Academy of Audiology 2005
Todd A Ricketts Benjamin W Y Hornsby

This brief report discusses the effect of digital noise reduction (DNR) processing on aided speech recognition and sound quality measures in 14 adults fitted with a commercial hearing aid. Measures of speech recognition and sound quality were obtained in two different speech-in-noise conditions (71 dBA speech, +6 dB SNR and 75 dBA speech, +1 dB SNR). The results revealed that the presence or ab...

2010
Cristina Ramponi Fionnuala C. Murphy Andrew J. Calder Philip J. Barnard

Depression has been associated with impaired recollection of episodic details in tests of recognition memory that use verbal material. In two experiments, the remember/know procedure was employed to investigate the effects of dysphoric mood on recognition memory for pictorial materials that may not be subject to the same processing limitations found for verbal materials in depression. In Experi...

Journal: :Seizure 1997
Christine Kilpatrick Vanessa Murrie Mark Cook David Andrewes Patricia Desmond John Hopper

The relationship between the degree and distribution of hippocampal atrophy measured by volumetric magnetic resonance imaging and severity of memory deficits in 25 patients with temporal lobe epilepsy secondary to mesial temporal sclerosis was assessed. Hippocampal volumes were expressed as a ratio of smaller to larger, normal ratio greater than 0.95. Neuropsychology tests included: subtests of...

Journal: :Journal of experimental psychology 1974
D W Massaro

The size of the sound stimulus employed in the first stage of speech processing was investigated in an attempt to determine the perceptual unit of analysis in speech recognition. It is assumed that the perceptual unit is held in a preperceptual auditory image until its sound pattern is complete and recognition has occurred. Vowels and consonant-vowel syllables were employed as test items in a r...

Journal: :Journal of AI and Data Mining 2014
Ali Harimi Ali Shahzadi Alireza Ahmadyfard Khashayar Yaghmaie

Speech emotion recognition (SER) is a new and challenging research area with a wide range of applications in man-machine interactions. The aim of an SER system is to recognize human emotion by analyzing the acoustics of speech sound. In this study, we propose spectral pattern features (SPS) and harmonic energy features (HES) for emotion recognition. These features extracted from the spectrogram ...
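The exact SPS and HES definitions are cut off in the snippet; as a rough illustration of the kind of spectrogram-derived features an SER front end computes, here is a generic band-energy extractor (frame size, hop, and band count are all assumed values, not the paper's):

```python
import numpy as np

def band_energy_features(signal, n_fft=512, hop=256, n_bands=8):
    # Short-time magnitude spectrogram via windowed FFT frames.
    window = np.hanning(n_fft)
    frames = np.array([signal[i:i + n_fft] * window
                       for i in range(0, len(signal) - n_fft + 1, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1))
    # Split the frequency axis into equal bands and take the mean
    # log-energy of each band, averaged over all frames.
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    return np.array([np.log(spec[:, lo:hi].mean() + 1e-10)
                     for lo, hi in zip(edges[:-1], edges[1:])])

# A 440 Hz tone concentrates its energy in the lowest band:
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
feats = band_energy_features(sig)
print(feats.shape)  # prints (8,)
```

A real SER pipeline would feed such per-utterance feature vectors to a classifier; the paper's SPS/HES features are more structured than this uniform-band version.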

2009
Jonas Beskow Giampiero Salvi Samer Al Moubayed

We give an overview of SynFace, a speech-driven face animation system originally developed for the needs of hard-of-hearing users of the telephone. For the 2009 LIPS challenge, SynFace includes not only articulatory motion but also non-verbal motion of gaze, eyebrows and head, triggered by detection of acoustic correlates of prominence and cues for interaction control. In perceptual evaluations...

2013
Mary Pietrowicz Karrie Karahalios

Sound has been an overlooked modality in visualization. Why? Because it is ephemeral. We experience it as it happens, often in community with others. Then, the sound is gone. Furthermore, sound in human communication is multidimensional and includes the semantic meaning of words, the meaning of expressive verbal gestures (paralinguistic and prosodic components), the nonvocal gestures, and relation...
