Search results for: multimodal input

Number of results: 250,965

Journal: Behaviour & Information Technology 1999
Simeon Keates, Peter Robinson

2003
Anurag Gupta

Multimodal dialogue systems allow users to input information in multiple modalities. These systems can handle simultaneous or sequential composite multimodal input. Different coordination schemes require such systems to capture, collect and integrate user input in different modalities, and then respond to a joint interpretation. We performed a study to understand the variability of input in mul...
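A minimal sketch of the kind of temporal grouping such systems need, collecting simultaneous or sequential inputs into one composite to be jointly interpreted. The ModalityEvent structure, its field names, and the 1.5-second gap are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModalityEvent:
    """One recognized input event from a single modality (hypothetical structure)."""
    modality: str   # e.g. "speech" or "gesture"
    content: str    # recognizer hypothesis
    start: float    # seconds
    end: float      # seconds

def group_composite_inputs(events: List[ModalityEvent],
                           max_gap: float = 1.5) -> List[List[ModalityEvent]]:
    """Group events into composite multimodal inputs.

    Overlapping events (simultaneous input) or events separated by less than
    max_gap seconds (sequential input) form one composite input for joint
    interpretation; a larger gap starts a new group.
    """
    groups: List[List[ModalityEvent]] = []
    for ev in sorted(events, key=lambda e: e.start):
        if groups and ev.start - groups[-1][-1].end <= max_gap:
            groups[-1].append(ev)
        else:
            groups.append([ev])
    return groups

# A spoken phrase followed shortly by a pen gesture forms one composite input;
# the later utterance starts a new one.
events = [
    ModalityEvent("speech", "zoom in here", 0.0, 1.2),
    ModalityEvent("gesture", "circle(region_3)", 1.6, 2.0),
    ModalityEvent("speech", "show restaurants", 6.0, 7.1),
]
print(group_composite_inputs(events))
```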

2002
Michael Johnston, Srinivas Bangalore, Gunaranjan Vasireddy, Amanda Stent, Patrick Ehlen, Marilyn A. Walker, Steve Whittaker, Preetam Maloor

Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailo...
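To make the division of labour among the listed components concrete, here is a toy wiring of understanding, dialogue management, and output generation. The class names, interfaces, and behaviour are invented for illustration and are not the described architecture's actual APIs.

```python
# Invented component interfaces; each real counterpart is far richer.
class Understanding:
    def interpret(self, speech, gesture):
        # The real architecture uses finite-state multimodal language processing.
        return {"act": "request", "content": speech, "referent": gesture}

class DialogueManager:
    def next_act(self, interpretation):
        # A speech-act based manager would consult dialogue state here.
        return {"type": "inform", "about": interpretation["referent"]}

class OutputGenerator:
    def render(self, act, mode="speech+graphics"):
        # Dynamic, user-tailored generation is reduced to string formatting.
        return f"[{mode}] {act['type']} about {act['about']}"

def handle_turn(speech, gesture):
    """One user turn through the pipeline: interpret -> decide -> render."""
    act = DialogueManager().next_act(Understanding().interpret(speech, gesture))
    return OutputGenerator().render(act)

print(handle_turn("what is this building", "point:bldg_9"))
```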

2009
Srinivas Bangalore, Michael Johnston

Multimodal grammars provide an effective mechanism for quickly creating integration and understanding capabilities for interactive systems supporting simultaneous use of multiple input modalities. However, like other approaches based on hand-crafted grammars, multimodal grammars can be brittle with respect to unexpected, erroneous, or disfluent input. In this article, we show how the finite-sta...
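A rough sketch of the robustness idea: if the literal speech/gesture combination falls outside the grammar, unexpected or disfluent speech tokens are effectively deleted before matching. The dictionary "grammar", the token names, and the matching strategy are stand-ins for the finite-state machinery, not the article's implementation.

```python
# Toy multimodal "grammar": (speech word, gesture symbol) -> semantic frame.
# Finite-state approaches compile such rules into transducers; this dict
# merely stands in for that machinery.
GRAMMAR = {
    ("call",  "point:person"):  {"act": "call",  "arg": "<deictic person>"},
    ("email", "point:person"):  {"act": "email", "arg": "<deictic person>"},
    ("zoom",  "circle:region"): {"act": "zoom",  "arg": "<circled region>"},
}

def interpret(speech_tokens, gesture_symbol):
    """Scan the speech tokens for one that combines with the gesture under
    the grammar, effectively deleting unexpected or disfluent tokens
    ("um", "please", ...) -- a crude stand-in for the edit-machine idea."""
    for word in speech_tokens:
        frame = GRAMMAR.get((word, gesture_symbol))
        if frame is not None:
            return frame
    return None

# Out-of-grammar material ("um", "please") is skipped rather than causing failure.
print(interpret(["um", "please", "call"], "point:person"))
```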

2014
Michael Johnston, John Chen, Patrick Ehlen, Hyuckchul Jung, Jay Lieske, Aarthi M. Reddy, Ethan Selfridge, Svetlana Stoyanchev, Brant Vasilieff, Jay G. Wilpon

The Multimodal Virtual Assistant (MVA) is an application that enables users to plan an outing through an interactive multimodal dialog with a mobile device. MVA demonstrates how a cloud-based multimodal language processing infrastructure can support mobile multimodal interaction. This demonstration will highlight incremental recognition, multimodal speech and gesture input, contextually-aware l...

2006
Xiao Huang, Sharon L. Oviatt, Rebecca Lunsford

Temporal as well as semantic constraints on fusion are at the heart of multimodal system processing. The goal of the present work is to develop user-adaptive temporal thresholds with improved performance characteristics over state-of-the-art fixed ones, which can be accomplished by leveraging both empirical user modeling and machine learning techniques to handle the large individual differences ...
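An illustrative sketch of a user-adaptive threshold, assuming the system records each user's observed lags between modalities. The fallback value, sample-size cut-off, and slack factor are invented parameters, not values from the study.

```python
import statistics

DEFAULT_THRESHOLD = 4.0  # seconds; stands in for a fixed system-wide threshold

def adaptive_threshold(observed_lags, min_samples=5, slack=2.0):
    """Per-user fusion threshold: wait long enough to cover most of this
    user's observed inter-modal lags, but no longer.

    Falls back to the fixed default until enough observations exist."""
    if len(observed_lags) < min_samples:
        return DEFAULT_THRESHOLD
    return statistics.mean(observed_lags) + slack * statistics.pstdev(observed_lags)

# A user who habitually gestures about half a second after speaking gets a
# much shorter wait than the fixed default.
print(adaptive_threshold([0.4, 0.5, 0.6, 0.45, 0.55, 0.5]))
```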

Journal: Applied Artificial Intelligence 1999
Yasuyuki Kono, Takehide Yano, Tetsuro Chino, Kaoru Suzuki, Hiroshi Kanazawa

Two requirements should be met in order to develop a practical multimodal interface system, i.e., (1) integration of delayed arrival of data, and (2) elimination of ambiguity in recognition results of each modality. This paper presents an efficient and generic methodology for interpretation of multimodal input to satisfy these requirements. The proposed methodology can integrate delayed-arrival d...
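A small sketch of the second requirement: a delayed-arriving gesture hypothesis, once it arrives, prunes an otherwise ambiguous speech n-best list. The scoring scheme, data shapes, and compatibility test below are invented for illustration, not taken from the paper.

```python
def resolve(speech_nbest, gesture_candidates, compatible):
    """Pick the highest-scoring mutually compatible (speech, gesture) pair.

    Each modality arrives with several scored hypotheses; cross-modal
    compatibility removes interpretations that either modality alone
    would leave ambiguous."""
    best, best_score = None, float("-inf")
    for s_hyp, s_score in speech_nbest:
        for g_hyp, g_score in gesture_candidates:
            if compatible(s_hyp, g_hyp) and s_score + g_score > best_score:
                best, best_score = (s_hyp, g_hyp), s_score + g_score
    return best

# Hypothetical example: the gesture's referent type rules out the wrong reading.
speech = [("delete the file", -1.0), ("delete the line", -1.2)]
gesture = [("point:line_12", -0.3)]
print(resolve(speech, gesture,
              lambda s, g: ("line" in s) == g.startswith("point:line")))
```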

2007
Manuel Giuliani, Alois Knoll

Multimodal systems must process several input streams efficiently and represent the input in a way that allows the establishment of connections between modalities. This paper describes a multimodal system that uses Combinatory Categorial Grammars to parse several input streams and translate them into logical formulas. These logical formulas are expressed in Hybrid Logic, which is very suitable ...
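A toy forward-application step in the spirit of Combinatory Categorial Grammar, with the semantics written as hybrid-logic-style strings. The categories, lexical entries, and formula syntax are illustrative; they are not the paper's grammar or logic.

```python
# Forward application: a functor of category X/Y applied to an argument of
# category Y yields X, and the semantics is function application.
def forward_apply(functor, argument):
    f_cat, f_sem = functor
    a_cat, a_sem = argument
    result_cat, _, arg_cat = f_cat.partition("/")
    if arg_cat != a_cat:
        raise ValueError(f"cannot apply {f_cat} to {a_cat}")
    return (result_cat, f_sem(a_sem))

# Speech supplies the verb; a pointing gesture supplies the object NP.
show = ("S/NP", lambda x: f"@e(show ∧ ⟨Patient⟩{x})")
pointed_object = ("NP", "obj_42")

print(forward_apply(show, pointed_object))  # ('S', '@e(show ∧ ⟨Patient⟩obj_42)')
```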

2000
Michael Johnston, Srinivas Bangalore

Multimodal interfaces require effective parsing and understanding of utterances whose content is distributed across multiple input modes. Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a unification-based grammar that is used by a multidimensional chart parser to compose inputs. This approach is highly expressive and supports a b...
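A minimal sketch of the unification step at the core of this approach: each modality contributes a partial feature structure, and their unification, when it succeeds, is the combined interpretation. Flat dictionaries replace typed feature structures here, and the example frames are invented.

```python
def unify(a, b):
    """Unify two flat feature structures (dicts); return None on a clash.

    A spoken command and a gesture each contribute partial features; their
    unification is the joint interpretation. This flat-dict version is only
    a sketch of full typed feature-structure unification."""
    result = dict(a)
    for key, value in b.items():
        if key in result and result[key] != value:
            return None  # conflicting values: unification fails
        result[key] = value
    return result

speech  = {"act": "show", "object": "restaurants"}
gesture = {"location": "area_7"}
print(unify(speech, gesture))   # combined interpretation
print(unify({"object": "restaurants"}, {"object": "hotels"}))  # None (clash)
```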

Chart: number of search results per year