Search results for: multimodal input

Number of results: 250,965

2002
Laila Dybkjær Stephen Berman Niels Ole Bernsen Jean Carletta Ulrich Heid Joaquim Llisterri Brian Macklin María Machuca Mònica Estruch

This report discusses overall functionality, interface, architecture and platform requirements for a toolset in support of transcription, annotation, information extraction and analysis of natural and multimodal interaction (NIMM) data. NIMM data are corpora of recorded audio and/or visual data from natural and/or multimodal human-human and/or human-system communicative intera...

2012
Katya Alahverdzhieva Dan Flickinger Alex Lascarides

This paper reports on an implementation of a multimodal grammar of speech and co-speech gesture within the LKB/PET grammar engineering environment. The implementation extends the English Resource Grammar (ERG, Flickinger (2000)) with HPSG types and rules that capture the form of the linguistic signal, the form of the gestural signal and their relative timing to constrain the meaning of the mult...

2000
Vladimir Pavlovic Ashutosh Garg James M. Rehg

Inferring users’ actions and intentions forms an integral part of design and development of any human-computer interface. The presence of noisy and at times ambiguous sensory data makes this problem challenging. We formulate a framework for temporal fusion of multiple sensors using input–output dynamic Bayesian networks (IODBNs). We find that contextual information about the state of the comput...
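As a rough illustration of the kind of temporal fusion described in the snippet above, the sketch below runs forward filtering in a tiny input-output HMM, a simple special case of an input-output dynamic Bayesian network, combining two noisy sensor streams under an observed application context. The state names, sensor models, and probabilities are invented for the example and are not taken from Pavlovic, Garg, and Rehg's system.

```python
# Minimal sketch of context-conditioned temporal sensor fusion.
# All states, sensors, and probabilities below are hypothetical.
import numpy as np

STATES = ["idle", "pointing"]              # hidden user actions (hypothetical)
CONTEXTS = ["menu_closed", "menu_open"]    # observed application context (hypothetical)

# Transition model P(state_t | state_{t-1}, context_t): one matrix per context value.
TRANS = {
    "menu_closed": np.array([[0.9, 0.1],
                             [0.4, 0.6]]),
    "menu_open":   np.array([[0.6, 0.4],
                             [0.2, 0.8]]),
}

# Per-sensor observation likelihoods P(obs | state); the sensors are assumed
# conditionally independent given the state, so their likelihoods multiply.
GAZE_LIK = {"away": np.array([0.7, 0.2]), "at_target": np.array([0.3, 0.8])}
HAND_LIK = {"still": np.array([0.8, 0.3]), "moving":   np.array([0.2, 0.7])}

def fuse_step(belief, context, gaze_obs, hand_obs):
    """One forward-filtering step: predict with the context-dependent
    transition model, then weight by the fused per-sensor likelihoods."""
    predicted = belief @ TRANS[context]
    likelihood = GAZE_LIK[gaze_obs] * HAND_LIK[hand_obs]
    posterior = predicted * likelihood
    return posterior / posterior.sum()

if __name__ == "__main__":
    belief = np.array([0.5, 0.5])  # uniform prior over hidden actions
    stream = [("menu_open", "at_target", "moving"),
              ("menu_open", "at_target", "moving"),
              ("menu_closed", "away", "still")]
    for ctx, gaze, hand in stream:
        belief = fuse_step(belief, ctx, gaze, hand)
        print(dict(zip(STATES, belief.round(3))))
```

The context enters only through the choice of transition matrix, which is one simple way an "input" can condition the dynamics in this family of models.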

2015
James Neilan Charles Cross Paul Rothhaar Loc Tran Mark Motter Garry Qualls Anna Trujillo Danette Allen

Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques ranging from single-algorithm solutions to complex ensemble classifier systems have been utilized in a research context in solving mission-critical flight decisions. Realized systems on ac...

1997
Daniela Petrelli Antonella De Angeli Walter Gerbino Giulia Cassano

This paper empirically investigates how humans use reference in space when interacting with a multimodal system able to understand written natural language and pointing with the mouse. We verified that user expertise plays an important role in the use of multimodal systems: experienced users produced multimodal input 84% of the time, while inexperienced users did so only 30% of the time. Moreover, experienced users are able to efficiently us...

1997
Bernhard Suhm

Recently, the first commercial dictation systems for continuous speech have become available. Although they have generally received positive reviews, error correction is still limited to choosing from a list of alternatives, speaking again, or typing. We developed a set of multimodal interactive correction methods which allow the user to switch modality between continuous speech, spelling, handwriting ...
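To make the idea of switching correction modality concrete, here is a minimal, hypothetical sketch of a correction loop in which the user picks a misrecognized word and repairs it by respeaking, spelling, or typing. The handler functions are console stand-ins for real recognizers; they are not Suhm's correction methods or any actual ASR API.

```python
# Hypothetical sketch: modality-switching error correction for a dictation result.
from typing import Callable, Dict, List

def respeak(_: str) -> str:
    # Stand-in for re-running continuous speech recognition on a respoken word.
    return input("Respeak the word (type its simulated transcript): ")

def spell(_: str) -> str:
    # Stand-in for a letter-by-letter spelling recognizer.
    return "".join(input("Spell the word (letters separated by spaces): ").split())

def type_in(_: str) -> str:
    # Keyboard fallback.
    return input("Type the correct word: ")

CORRECTORS: Dict[str, Callable[[str], str]] = {
    "respeak": respeak,
    "spell": spell,
    "type": type_in,
}

def correct(hypothesis: List[str]) -> List[str]:
    """Let the user pick a misrecognized word and a correction modality."""
    print("Recognized:", " ".join(f"[{i}] {w}" for i, w in enumerate(hypothesis)))
    idx = int(input("Index of the wrong word (-1 to accept): "))
    if idx < 0:
        return hypothesis
    modality = input(f"Correction modality {sorted(CORRECTORS)}: ").strip()
    hypothesis[idx] = CORRECTORS[modality](hypothesis[idx])
    return hypothesis

if __name__ == "__main__":
    print("Final:", " ".join(correct("recognize speech with hat".split())))
```

The point of the dispatch table is simply that the same correction slot can be filled by whichever modality the user prefers at that moment.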

2007
Marc Bourgois Monica Tavanti

Multimodal interfaces have for quite some time been considered the "interfaces of the future", aiming to allow more natural interaction and offering new opportunities for parallelism and individual capacity increases. This document provides the reader with an overview of multimodal interfaces and of the results of empirical studies assessing users' performance with multimodal systems. The study...

2005
Lori Scarlatos Tony Scarlatos

The human-computer interface is widely recognized as an important part of any software project. Consequently, the principles of human-computer interaction are increasingly being taught in computer science departments. Usually, this type of course will focus on the design, implementation and testing of graphical user interfaces. Yet today’s ubiquitous computing applications require interfaces th...

1998
Toshiyuki Takezawa Tsuyoshi Morimoto

We have built a multimodal-input multimedia-output guidance system called MMGS. The input of a user can be a combination of speech and handwritten gestures. The system, on the other hand, outputs a response that combines speech, three-dimensional graphics, and/or other information. This system can interact cooperatively with the user by resolving ellipsis/anaphora and various ambiguities such ...
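As a loose illustration of combining speech with a pointing or pen gesture, the sketch below resolves deictic words in a speech hypothesis against the most nearly co-timed gesture. The object names, time window, and data structures are hypothetical and are not taken from MMGS.

```python
# Hypothetical sketch: resolving deictic references by temporal alignment
# of speech tokens with pointing gestures.
from dataclasses import dataclass
from typing import List

@dataclass
class Gesture:
    target: str   # object the user pointed at
    time: float   # seconds since the start of the utterance

@dataclass
class SpeechToken:
    word: str
    time: float

DEICTICS = {"this", "that", "here", "there"}

def resolve(tokens: List[SpeechToken], gestures: List[Gesture],
            window: float = 1.0) -> List[str]:
    """Replace each deictic word with the target of the nearest gesture that
    falls within `window` seconds; leave it unresolved otherwise."""
    resolved = []
    for tok in tokens:
        if tok.word in DEICTICS and gestures:
            nearest = min(gestures, key=lambda g: abs(g.time - tok.time))
            resolved.append(nearest.target
                            if abs(nearest.time - tok.time) <= window
                            else tok.word)
        else:
            resolved.append(tok.word)
    return resolved

if __name__ == "__main__":
    speech = [SpeechToken(w, float(t)) for t, w in
              enumerate(["show", "me", "the", "route", "to", "this"])]
    gestures = [Gesture(target="exhibit_hall_B", time=5.2)]
    print(" ".join(resolve(speech, gestures)))  # ... route to exhibit_hall_B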

[Chart: number of search results per year]