Search results for: compressed speech

Number of results: 141827

Journal: Speech Communication 2012
Pandurangarao N. Kulkarni Prem C. Pandey Dakshayani S. Jangamashetti

In multi-band frequency compression, the speech spectrum is divided into a number of analysis bands, and the spectral samples in each band are compressed towards the band center by a constant compression factor, resulting in presentation of the speech energy in relatively narrow bands, for reducing the effect of increased intraspeech spectral masking associated with sensorineural hearing loss. ...
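The band-wise compression described above can be illustrated with a minimal sketch, assuming a magnitude spectrum, uniform band edges, and energy-preserving bin reassignment (this is not the authors' exact formulation):

```python
import numpy as np

def multiband_compress(spectrum, n_bands=4, factor=0.6):
    """Sketch of multi-band frequency compression: within each analysis
    band, move every spectral sample toward the band center by a constant
    compression factor, concentrating energy in narrower sub-bands."""
    n = len(spectrum)
    out = np.zeros_like(spectrum)
    edges = np.linspace(0, n, n_bands + 1).astype(int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        center = (lo + hi) / 2.0
        for k in range(lo, hi):
            # bin k lands `factor` of the way from the center out to k
            j = int(round(center + factor * (k - center)))
            out[min(j, n - 1)] += spectrum[k]
    return out

mag = np.ones(16)                      # flat toy spectrum
comp = multiband_compress(mag, n_bands=2, factor=0.5)
```

Because each input bin contributes exactly once, total spectral energy is preserved while the band edges are left empty, which is the intended narrowing effect.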

2015
Azzedine Touazi Mohamed Debyeche

Voice Activity Detection (VAD) algorithms based on machine learning techniques have shown competitive results in the area of automatic speech recognition. This paper describes a new approach to VAD based on Support Vector Machines (SVM) for a Distributed Speech Recognition (DSR) system. In the proposed scheme, speech and non-speech frames are detected from the compressed Mel Frequency Cep...
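The SVM-based frame classification idea can be sketched roughly as follows, with synthetic stand-in features rather than the paper's compressed DSR cepstra, and a tiny hand-rolled linear SVM in place of a full library:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=50):
    """Minimal linear SVM via sub-gradient descent on the hinge loss.
    Labels y must be in {-1, +1}. A stand-in for a real SVM library."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
# Hypothetical 13-dim "cepstral" features; a real DSR back end would
# decode its features from the compressed bitstream instead.
speech = rng.normal(1.0, 0.5, size=(200, 13))
nonspeech = rng.normal(-1.0, 0.5, size=(200, 13))
X = np.vstack([speech, nonspeech])
y = np.array([1] * 200 + [-1] * 200)

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)                      # +1 = speech frame
accuracy = (pred == y).mean()
```

On well-separated synthetic clusters like these, the classifier reaches near-perfect training accuracy; the paper's contribution lies in making such decisions directly from compressed DSR features.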

2014
Ying-Hui Lai Fei Chen Yu Tsao

Hearing-impaired patients have a limited hearing dynamic range for speech perception, which partially accounts for their poor speech understanding, particularly in noise. Wide dynamic range compression aims to compress the speech signal into the usable hearing dynamic range of hearing-impaired listeners; however, it normally uses a static compression-based strategy. This work proposed a str...
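The static compression curve the abstract contrasts with can be sketched as a simple input/output mapping; the kneepoint and ratio values here are arbitrary illustrations:

```python
import numpy as np

def wdrc_output_db(input_db, knee_db=45.0, ratio=3.0):
    """Static WDRC curve: linear (1:1) below the kneepoint; above it,
    every `ratio` dB of input yields only 1 dB of output increase,
    squeezing speech into a narrower output dynamic range."""
    input_db = np.asarray(input_db, dtype=float)
    return np.where(input_db <= knee_db,
                    input_db,
                    knee_db + (input_db - knee_db) / ratio)

levels = np.array([30.0, 45.0, 75.0])
out = wdrc_output_db(levels)       # -> [30.0, 45.0, 55.0]
```

Note how a 45 dB input range (30-75 dB) maps into a 25 dB output range; a "static" strategy applies this same curve regardless of the signal, which is the limitation the paper addresses.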

Journal: Speech Communication 2006
Naveen Srinivasamurthy Antonio Ortega Shrikanth S. Narayanan

In this paper the remote speech recognition problem is addressed. Speech features are extracted at a client and transmitted to a remote recognizer. This enables a low complexity client, which does not have the computational and memory resources to host a complex speech recognizer, to make use of distributed resources to provide speech recognition services to the user. The novelties of the propo...
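The client/server split described above can be sketched with a toy uniform quantizer; the step size and int16 payload are illustrative only (real DSR front ends, e.g. the ETSI standard, use vector quantization of the cepstral features):

```python
import numpy as np

def client_encode(features, step=0.05):
    """Thin client (sketch): uniformly quantize feature vectors into a
    compact byte payload for transmission to the remote recognizer."""
    return np.round(np.asarray(features) / step).astype(np.int16).tobytes()

def server_decode(payload, step=0.05):
    """Server side: reconstruct approximate features for recognition."""
    return np.frombuffer(payload, dtype=np.int16) * step

feats = np.array([0.12, -0.33, 1.07])
rec = server_decode(client_encode(feats))   # error bounded by step/2
```

The design point is that the client only computes and quantizes features, while all recognizer complexity (acoustic models, search) stays on the server.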

2017
Shinnosuke Takamichi Tomoki Koriyama Hiroshi Saruwatari

This paper presents sampling-based speech parameter generation using moment-matching networks for Deep Neural Network (DNN)-based speech synthesis. Although people never produce exactly the same speech twice, even when expressing the same linguistic and para-linguistic information, typical statistical speech synthesis produces exactly the same speech, i.e., there is no inter-utterance variatio...
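Moment-matching networks are commonly trained with a divergence such as the maximum mean discrepancy (MMD); a minimal numpy version of the squared MMD with a Gaussian kernel (an illustration of the criterion, not the paper's full training setup):

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between sample sets x and y
    under a Gaussian kernel; zero iff the kernel mean embeddings match."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(100, 2))
b = rng.normal(2.0, 1.0, size=(100, 2))
# identical sample sets give (numerically) zero; shifted sets do not
```

Minimizing such a divergence matches the distribution of generated samples to the data, which is what allows the network to produce varied rather than identical outputs.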

Thesis: Ministry of Science, Research and Technology - Allameh Tabataba'i University - Faculty of Literature and Foreign Languages 1389

Within the components of communicative competence, a special emphasis is put on the "rules of politeness," specifically the politeness strategies (Brown and Levinson, 1978) that speakers deploy when performing the request speech act. This is because the degree of imposition that making a request places upon one's interlocutor(s) has been seen to be influenced by several factors among which, as ...

Journal: Anglìstika ta amerikanìstika 2022

The article deals with the investigation of grammatical means of achieving expressiveness in compressed texts. In addition, the peculiarities of the functioning of these means are considered in the article. Research into linguistic expressive means is closely related to stylistics; thus, compressed texts are, first of all, represented by stylistic devices, as discourse is actualized through stylistics. The grammatical level of language is traditionally subdivi...

Journal: Journal of the American Academy of Audiology 2006
Nancy Vaughan Daniel Storzbach Izumi Furukawa

The goal of this study was to identify specific neurocognitive deficits that are associated with older listeners' difficulty understanding rapid speech. Older listeners performed speech recognition tests composed of time-compressed sentences with and without context, and completed a neurocognitive battery aimed specifically at testing working memory, processing speed, and attention. A principal compo...
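Time-compressed stimuli of the kind used in such studies can be approximated crudely by resampling the time axis; note that real perceptual experiments typically use pitch-preserving techniques (e.g., SOLA/WSOLA) rather than plain resampling:

```python
import numpy as np

def time_compress(signal, rate=0.6):
    """Shorten a signal to `rate` of its duration by linear interpolation
    over a resampled time axis. Crude sketch: this also shifts pitch,
    unlike the pitch-preserving methods used in listening studies."""
    n_out = int(len(signal) * rate)
    idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(idx, np.arange(len(signal)), signal)

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
y = time_compress(x, rate=0.6)     # 40% shorter in duration
```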

Journal: EURASIP J. Adv. Sig. Proc. 2005
Michael Büchler Silvia Allegro Stefan Launer Norbert Dillier

A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes “clean speech,” “speech in noise,” “noise,” and “music.” A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmo...
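One of the feature families mentioned, amplitude modulations, can be sketched as a simple envelope modulation depth, which tends to be high for speech-like signals and low for steady signals (a toy illustration, not the paper's feature set):

```python
import numpy as np

def modulation_depth(signal, frame=256):
    """Depth of the frame-level amplitude envelope: (max-min)/(max+min).
    Speech carries strong syllabic (~4 Hz) modulation; steady tones
    and stationary noise do not."""
    n = len(signal) // frame
    env = np.abs(signal[: n * frame]).reshape(n, frame).mean(axis=1)
    return (env.max() - env.min()) / (env.max() + env.min() + 1e-12)

t = np.linspace(0, 1, 8192, endpoint=False)
carrier = np.sin(2 * np.pi * 200 * t)
speech_like = carrier * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))  # 4 Hz AM
steady = carrier
```

A classifier would combine several such descriptors (modulation, spectral profile, harmonicity) rather than rely on any single one.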

2005
Wolfgang Hürst Tobias Lauer Cédric Bürfent Georg Götz

In pursuit of the goal to make recorded speech as easy to skim as printed text, a variety of methods and user interfaces have been suggested in the literature, involving time-compressed audio, speech segmentation and recognition, etc. We propose a new user interface, the elastic audio slider, which makes navigation in speech documents similar to video navigation or text scrolling. The approach ...

[Chart: number of search results per year]
