Search results for: multimodal input

Number of results: 250965

2007
Stephen A. Brewster Atte Kortekangas

This paper proposes the addition of non-speech sounds to aid people who use scanning as their method of input. Scanning input is a temporal task: users must press a switch when a cursor is over the required target. However, it is usually presented as a spatial task, with the items to be scanned laid out in a grid. Research has shown that for temporal tasks the auditory modality is often bette...
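A minimal sketch of the scanning mechanism the abstract describes, with a placeholder hook where a non-speech auditory cue would be played on each scan step. All names, the grid contents, and the scan interval are illustrative assumptions, not details from the paper:

```python
import time

ITEMS = ["yes", "no", "help", "stop", "more", "back"]
SCAN_INTERVAL = 0.8  # seconds per item; real systems tune this per user

def scan(switch_pressed, play_cue=lambda i: None):
    """Step a highlight through ITEMS until switch_pressed() returns True.

    play_cue is where a non-speech sound would mark each scan step,
    carrying the timing information through the auditory channel.
    """
    while True:
        for i, item in enumerate(ITEMS):
            play_cue(i)                      # auditory cue for this step
            deadline = time.time() + SCAN_INTERVAL
            while time.time() < deadline:
                if switch_pressed():
                    return item              # selection is purely temporal
                time.sleep(0.01)
```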

2016
Angeliki Lazaridou Grzegorz Chrupala Raquel Fernández Marco Baroni

Children learn the meaning of words by being exposed to perceptually rich situations (linguistic discourse, visual scenes, etc.). Current computational learning models typically simulate these rich situations through impoverished symbolic approximations. In this work, we present a distributed word learning model that operates on child-directed speech paired with realistic visual scenes. The mode...
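A toy illustration of the underlying pairing idea, not the authors' model: in cross-situational learning, each utterance is observed together with a set of visual referents, and word meanings emerge from accumulated co-occurrence. The paper's model uses distributed representations over realistic scenes; this sketch only shows the simplest counting variant:

```python
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(float))

def observe(words, referents):
    """Record one utterance paired with the referents visible in the scene."""
    for w in words:
        for r in referents:
            counts[w][r] += 1.0 / len(referents)  # spread credit over scene

def best_referent(word):
    assoc = counts[word]
    return max(assoc, key=assoc.get) if assoc else None

observe(["look", "a", "dog"], {"dog", "ball"})
observe(["the", "dog", "runs"], {"dog", "garden"})
print(best_referent("dog"))  # -> 'dog' after repeated exposure
```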

2002
Thomas Strösslin Christophe Krebser Angelo Arleo Wulfram Gerstner

Motivation: understand multisensory integration in spatial representations.
• Hippocampal place cells form a multimodal representation of space.
• Cells in the superior colliculus show multimodal enhancement.
• The relevance of a modality depends on environmental conditions.
• We propose a model of how to weigh modalities using a gating network.
• Place-code quality in varying conditions drives learning of g...
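A minimal sketch of the gating idea in an assumed form, not the paper's exact network: each modality produces a position estimate, and a softmax gate over learned reliability scores weighs the estimates into one combined estimate. In the paper, the gates are driven by place-code quality under varying environmental conditions:

```python
import numpy as np

def gated_estimate(estimates, reliability_scores):
    """estimates: (M, D) array of per-modality position estimates.
    reliability_scores: (M,) learned scores; higher = more trusted."""
    g = np.exp(reliability_scores)
    g /= g.sum()                     # softmax gate over modalities
    return g @ estimates             # convex combination of estimates

vision = np.array([1.0, 2.0])
odometry = np.array([1.4, 1.8])
print(gated_estimate(np.stack([vision, odometry]), np.array([2.0, 0.5])))
```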

2002
Tommi Ilmonen Janne Kontkanen

Traditional ways to handle user input in software are uncomfortable when an application wishes to use novel input devices. This is especially the case in gesture-based user interfaces. In this paper we describe these problems and, as a solution, present an architecture and an implementation of a user input toolkit. We show that the higher-level processing of user input, such as gesture recognit...
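A sketch of the kind of layered input architecture the abstract argues for (all class names are hypothetical): raw device drivers emit generic events, and higher-level processors such as gesture recognizers are ordinary nodes in the same pipeline, so applications never talk to devices directly:

```python
class InputDevice:
    """Driver layer: wraps one physical device, emits generic events."""
    def poll(self):                  # -> list of (name, value) events
        raise NotImplementedError

class Processor:
    """Processing layer: filter, segmenter, gesture recognizer, etc."""
    def process(self, events):       # events in, refined events out
        return events

class Pipeline:
    """Applications consume the pipeline output, never the device."""
    def __init__(self, device, processors):
        self.device, self.processors = device, processors

    def step(self):
        events = self.device.poll()
        for p in self.processors:    # e.g. filter -> segment -> recognize
            events = p.process(events)
        return events
```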

2003
Stéphanie Buisine Jean-Claude Martin

In the field of intuitive HCI, Embodied Conversational Agents (ECAs) are being developed mostly with speech input. In this paper, we study whether another input modality leads to more effective and pleasant “bi-directional” multimodal communication. In a Wizard-of-Oz experiment, adults and children were videotaped while interacting with 2D animated agents within a game application. Each subje...

Thesis: Ministry of Science, Research and Technology - Lorestan University - Mathematics Research Institute, 1392


2004
Anurag Kumar Gupta Tasos Anastasakos

Natural interaction in multimodal dialogue systems demands a quick system response after the end of a user turn. Predicting the end of user input at each multimodal dialogue turn is complicated, as users can interact through modalities in any order and convey a variety of different messages to the system within the turn. Several multimodal interaction frameworks have used fixed-duration time...
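A hedged sketch of the two turn-end strategies the abstract contrasts: a fixed inactivity timeout as the fallback, with a predictor that can close the turn early. The predictor below is a stand-in heuristic (turn is likely over once every modality the dialogue state expects has arrived), not the authors' method:

```python
import time

FIXED_TIMEOUT = 2.0  # seconds of inactivity before the turn is closed

def turn_complete(received_modalities, expected):
    """Stand-in predictor: all expected modalities have been observed."""
    return expected.issubset(received_modalities)

def wait_for_turn_end(next_event, expected=frozenset({"speech"})):
    """next_event() returns a modality name or None if nothing new."""
    received, last_activity = set(), time.time()
    while time.time() - last_activity < FIXED_TIMEOUT:
        event = next_event()
        if event:
            received.add(event)
            last_activity = time.time()
            if turn_complete(received, expected):
                return received              # respond early, no timeout
        else:
            time.sleep(0.05)
    return received                          # fallback: fixed timeout
```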

2005
Arjan J.F. Kok Robert van Liere

The object-oriented Visualization Toolkit (VTK) is widely used for scientific visualization. VTK is a visualization library that provides functions for presenting 3D data. Interaction with the visualized data is done by mouse and keyboard. Support for three-dimensional and multimodal input is non-existent. This paper describes VR-VTK: a multimodal interface to VTK on a desktop virtual environme...
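For context, the baseline the abstract describes: a standard VTK pipeline rendered in a window whose vtkRenderWindowInteractor supplies only mouse and keyboard interaction, which is the gap VR-VTK fills with 3D and multimodal input. A minimal example, assuming the `vtk` Python package:

```python
import vtk

source = vtk.vtkSphereSource()           # simple 3D dataset to display
source.SetRadius(1.0)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()  # mouse/keyboard only
interactor.SetRenderWindow(window)
interactor.Initialize()
window.Render()
interactor.Start()
```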

2010
Daniel Sonntag Bogdan Sacaleanu

Over the last several years, speech-based question answering (QA) has become very popular, in contrast to pure search-engine-based approaches on the desktop. Open-domain QA systems are now much more powerful and precise, and they can be used in speech applications. Speech-based question answering systems often rely on predefined grammars for speech understanding. In order to improve the coverage o...
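A toy illustration of grammar-based speech understanding for QA (the patterns and query labels are invented): a recognized utterance must match a predefined rule to be mapped onto a structured query, which is exactly why coverage is limited to what the grammar anticipates:

```python
import re

GRAMMAR = [
    (re.compile(r"who (?:is|was) (?P<person>.+)", re.I),
     lambda m: ("PERSON_DEF", m.group("person"))),
    (re.compile(r"when did (?P<event>.+) happen", re.I),
     lambda m: ("EVENT_DATE", m.group("event"))),
]

def understand(utterance):
    """Map an utterance to a structured query, or None if out of grammar."""
    for pattern, build in GRAMMAR:
        m = pattern.match(utterance.strip())
        if m:
            return build(m)
    return None  # out-of-grammar input fails -- the coverage problem

print(understand("Who was Alan Turing"))   # ('PERSON_DEF', 'Alan Turing')
print(understand("Tell me about Turing"))  # None: grammar has no rule
```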

2003
A. Corradini

In this paper, we address the modality integration issue using the example of a system that aims to enable users to combine speech and 2D gestures when interacting with life-like characters in an educative game context. In a preliminary, limited fashion, we investigate and present the use of combined speech input, 2D gestures, and environment entities for user–system interaction.
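A hedged sketch of the integration idea (the data model and time window are hypothetical): a deictic word in the speech stream is resolved against the environment entity indicated by the 2D gesture closest in time within a small window:

```python
FUSION_WINDOW = 1.5  # seconds; pairing tolerance between modalities

def fuse(speech_tokens, gestures):
    """speech_tokens: [(time, word)]; gestures: [(time, entity)]."""
    resolved = []
    for t_word, word in speech_tokens:
        if word in ("this", "that", "here", "there"):
            # pick the gesture closest in time within the fusion window
            near = [g for g in gestures if abs(g[0] - t_word) <= FUSION_WINDOW]
            if near:
                word = min(near, key=lambda g: abs(g[0] - t_word))[1]
        resolved.append(word)
    return resolved

print(fuse([(0.2, "move"), (0.6, "this"), (1.0, "left")],
           [(0.7, "castle")]))  # ['move', 'castle', 'left']
```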

Chart: number of search results per year
