Search results for: audio visual distraction
Number of results: 424018
A novel model is presented to learn bimodally informative structures from audio-visual signals. The signal is represented as a sparse sum of audio-visual kernels. Each kernel is a bimodal function consisting of synchronous snippets of an audio waveform and a spatio-temporal visual basis function. To represent an audio-visual signal, the kernels can be positioned independently and arbitrarily in...
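As a rough illustration of such a decomposition, the sketch below runs a greedy matching-pursuit loop over a small dictionary of bimodal kernels, scoring each kernel by the sum of its best audio and video correlations. The flattened-frame video representation, the additive score, and all array shapes are assumptions made for the example, not the paper's learning algorithm.

```python
import numpy as np

def bimodal_matching_pursuit(audio, video, kernels, n_iters=10):
    """Greedy sparse decomposition of an audio-visual signal.

    audio   : 1-D array (waveform)
    video   : 2-D array (frames x flattened pixels)
    kernels : list of (audio_snippet, video_snippet) pairs, each unit-norm
    Returns a list of (kernel_index, audio_pos, video_pos, a_coef, v_coef).
    """
    a_res, v_res = audio.astype(float).copy(), video.astype(float).copy()
    code = []
    for _ in range(n_iters):
        best = None
        for i, (ka, kv) in enumerate(kernels):
            # best placement of the audio snippet on the audio residual
            a_corr = np.correlate(a_res, ka, mode="valid")
            ta = int(np.argmax(np.abs(a_corr)))
            # best placement of the video snippet on the video residual
            Lv = kv.shape[0]
            v_corr = np.array([np.sum(v_res[t:t + Lv] * kv)
                               for t in range(v_res.shape[0] - Lv + 1)])
            tv = int(np.argmax(np.abs(v_corr)))
            score = abs(a_corr[ta]) + abs(v_corr[tv])
            if best is None or score > best[0]:
                best = (score, i, ta, tv, a_corr[ta], v_corr[tv])
        _, i, ta, tv, ca, cv = best
        ka, kv = kernels[i]
        a_res[ta:ta + len(ka)] -= ca * ka       # remove explained audio energy
        v_res[tv:tv + kv.shape[0]] -= cv * kv   # remove explained video energy
        code.append((i, ta, tv, ca, cv))
    return code

# toy usage with one hand-made kernel
audio = np.random.randn(1000)
video = np.random.randn(25, 64)               # 25 frames of 8x8 pixels, flattened
ka = np.hanning(64); ka /= np.linalg.norm(ka)
kv = np.ones((5, 64)) / np.sqrt(5 * 64)
print(bimodal_matching_pursuit(audio, video, [(ka, kv)], n_iters=3))
```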
Technology-driven interactions are becoming commonplace, particularly as online classes, telecommuting, and virtual meetings across distances and time zones have all increased in popularity. Platforms such as Google Meet, Skype, Webex, and Zoom use synchronous audio-visual communication supported by text-based chat, emoticon responses, and other supplementary functions. Given this uptick in the use of video conferen...
A study on audio, visual, and audio-visual egocentric distance perception by moving participants in virtual environments is presented. Audio-visual rendering is provided using tracked passive visual stereoscopy and acoustic wave field synthesis (WFS). Distances are estimated using indirect blind-walking (triangulation) under each rendering condition. Experimental results show that distances perce...
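For context, indirect blind-walking reconstructs the perceived distance from the geometry of the walked path. A minimal sketch under an assumed protocol (a perpendicular side-step followed by a pointing turn; the function and variable names are invented here, not taken from the study):

```python
import numpy as np

def triangulated_distance(walk_leg_m, turn_angle_deg):
    """Estimate egocentric target distance from an indirect blind-walking trial.

    Assumed geometry (illustrative only): the participant faces the target,
    walks `walk_leg_m` metres perpendicular to the initial line of sight,
    then turns by `turn_angle_deg` (measured from the walking direction) to
    point at the remembered target. The perceived distance is the remaining
    perpendicular leg of the resulting right triangle.
    """
    return walk_leg_m * np.tan(np.radians(turn_angle_deg))

# e.g. a 2 m side-step and a 63.4 degree turn imply a perceived distance of ~4 m
print(round(triangulated_distance(2.0, 63.43), 2))
```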
The risk of drivers engaging in distracting activities is growing as in-vehicle technology and carried-in devices become increasingly common and complicated. Consequently, distraction and inattention contribute to crash risk and are likely to have an increasing influence on driving safety. Analysis of police-reported crash data from 2008 showed that distractions contributed to an estimated 5,8...
In this paper we investigate the benefits of classifier combination (fusion) for a multimodal system for personal identity verification. The system uses frontal face images and speech. We show that a sophisticated fusion strategy enables the system to outperform its facial and vocal modules when taken separately. We show that both trained linear weighted schemes and fusion by Support Vector Machine classifi...
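A hedged sketch of this kind of score-level fusion, using synthetic matcher scores and scikit-learn's SVC, comparing a fixed linear weighted sum against a trained SVM boundary; the score distributions, weights, and threshold are invented for illustration and are not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Each verification claim yields a face-matcher score and a speech-matcher
# score; the label says whether the claim was genuine (1) or an impostor (0).
rng = np.random.default_rng(0)
genuine  = rng.normal([0.7, 0.8], 0.15, size=(200, 2))
impostor = rng.normal([0.3, 0.4], 0.15, size=(200, 2))
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(200), np.zeros(200)]

# Baseline: fixed linear weighted sum of the two scores with a threshold.
weighted = (0.5 * X[:, 0] + 0.5 * X[:, 1]) > 0.55

# Trained fusion: an SVM learns the accept/reject boundary in score space.
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("weighted-sum accuracy:", (weighted == y).mean())
print("SVM fusion accuracy:  ", svm.score(X, y))
```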
In this work, a system for audio-visual speech recognition is presented. A new hybrid visual feature combination suitable for audio-visual speech recognition was implemented. The features comprise both the shape and the appearance of the lips, and dimensionality reduction is applied using the discrete cosine transform (DCT). A large visual speech database of the German language has been ass...
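As a sketch of DCT-based dimensionality reduction for such lip appearance features (the ROI size and the number of retained coefficients are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.fftpack import dct

def lip_roi_dct_features(roi, keep=8):
    """Reduce a grayscale lip region of interest to a compact DCT feature vector.

    Take a 2-D DCT of the ROI and keep only the top-left `keep` x `keep`
    low-frequency block, which carries most of the appearance energy.
    """
    coeffs = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:keep, :keep].ravel()

roi = np.random.rand(64, 64)            # stand-in for a cropped lip image
print(lip_roi_dct_features(roi).shape)  # (64,) feature vector per frame
```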
This work proposes a method to exploit both audio and visual speech information to extract a target speaker from a mixture of competing speakers. The work begins by taking an effective audio-only method of speaker separation, namely the soft mask method, and modifying its operation to allow visual speech information to improve the separation process. The audio input is taken from a single chann...
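A minimal sketch of the soft (ratio) mask idea, assuming per-speaker magnitude estimates are already available; in the paper it is the visual speech stream that helps improve those estimates, which is not modelled here.

```python
import numpy as np

def soft_mask(target_mag, interferer_mag, p=2):
    """Ratio (soft) mask for extracting a target speaker from a mixture.

    Each time-frequency cell of the mixture is weighted by the target's share
    of the energy. The exponent `p` and the way the per-speaker estimates are
    obtained (here they are simply given) are assumptions of this sketch.
    """
    t, i = target_mag ** p, interferer_mag ** p
    return t / (t + i + 1e-12)

# Apply the mask to a mixture spectrogram; a full system would then invert
# the masked magnitude with the mixture phase to get the separated waveform.
mix = np.abs(np.random.rand(257, 100))       # stand-in mixture magnitude
tgt_est, int_est = mix * 0.6, mix * 0.4      # stand-in per-speaker estimates
separated_mag = soft_mask(tgt_est, int_est) * mix
```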
We address the problem of robust lip tracking, visual speech feature extraction, and sensor integration for audiovisual speech recognition applications. An appearance-based model of the articulators, which represents linguistically important features, is learned from example images and is used to locate, track, and recover visual speech information. We tackle the problem of joint temporal model...
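A hedged sketch of an appearance-based articulator model in the "eigenlips" spirit: learn a PCA subspace from example mouth images and describe a new frame by its projection onto that subspace. The image size, component count, and training data here are all placeholders, not the paper's model.

```python
import numpy as np
from sklearn.decomposition import PCA

# Learn a low-dimensional linear appearance subspace from example mouth images
# (random stand-ins here; a real system would use labelled lip ROIs).
train = np.random.rand(500, 32 * 32)        # 500 example mouth images, flattened
model = PCA(n_components=12).fit(train)

# Describe a new frame's mouth region by its appearance parameters and check
# how well the subspace reconstructs it.
frame = np.random.rand(32 * 32)
appearance_params = model.transform(frame[None, :])[0]
reconstruction = model.inverse_transform(appearance_params[None, :])[0]
print(appearance_params.shape, reconstruction.shape)
```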
The transmission of the entire video and audio sequences over an internal or external network during the implementation of audio-visual recognition over internet protocol is inefficient, especially when only selected data out of those sequences are actually used for the recognition process. Hence, in this paper, we propose an efficient method of implementing audio-visual rec...