Search results for: multimodal
Number of results: 31670
This report presents our approach for multimodal interaction in the COVEN virtual environments and describes the functions and architecture of the multimodal interaction techniques modules that have been developed so far. A first milestone was reached in our work, with the completion of a first set of modules focusing on generic techniques. Both the overall functional architecture and the proce...
The mammalian cerebellum is a highly multimodal structure, receiving inputs from multiple sensory modalities and integrating them during complex sensorimotor coordination tasks. Previously, using cell-type-specific anatomical projection mapping, it was shown that multimodal pathways converge onto individual cerebellar granule cells (Huang et al., 2013). Here we directly measure synaptic current...
This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies that are based on the combination of time-based and domain semantics. Then, we present the results from a user study...
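The snippet above mentions fusion of speech and paddle gestures based on time-based and domain semantics. As an illustration only (not the paper's actual implementation), a minimal time-window fusion strategy can be sketched as follows; the event fields, the `fuse` function, and the 1.5-second window are all hypothetical choices:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # "speech" or "gesture" (hypothetical labels)
    content: str     # recognized utterance or gesture identifier
    timestamp: float # seconds since session start

def fuse(events, window=1.5):
    """Pair each speech event with the nearest gesture event whose
    timestamp falls within `window` seconds of it (time-based fusion)."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    fused = []
    for s in speech:
        nearby = [g for g in gestures if abs(g.timestamp - s.timestamp) <= window]
        if nearby:
            g = min(nearby, key=lambda g: abs(g.timestamp - s.timestamp))
            fused.append((s.content, g.content))
    return fused

events = [
    InputEvent("speech", "move that", 1.0),
    InputEvent("gesture", "point_block_A", 1.2),
    InputEvent("gesture", "point_block_B", 5.0),
]
print(fuse(events))  # the gesture at 1.2 s pairs with the utterance at 1.0 s
```

A full system of the kind described would additionally check domain semantics (e.g. whether the gesture's referent is type-compatible with the utterance) before accepting a pairing.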
Two important themes in current work on interfaces are multimodal interaction and the use of dialogue. Human multimodal dialogues are symmetric, i.e., both participants communicate multimodally. We describe a proof of concept system that supports symmetric multimodal communication for speech and sketching in the domain of simple mechanical device design. We discuss three major aspects of the co...
Multimodal reference resolution is a process that automatically identifies what users refer to during multimodal human-machine conversation. Given the substantial work on multimodal reference resolution, it is important to evaluate the current state of the art, understand its limitations, and identify directions for future improvement. We conducted a series of user studies to evaluate the capab...
The development of multimodal interfaces and algorithms for multimodal integration requires knowledge of integration patterns that represent how people use multiple modalities. We analyzed multimodal interaction with three different applications. Semantic analysis revealed that multimodal inputs can exhibit cooperation patterns other than complementarity and redundancy. Analysis of the relationship betwee...
Interfaces for mobile information access need to allow users flexibility in their choice of modes and interaction style in accordance with their preferences, the task at hand, and their physical and social environment. This paper describes the approach to multimodal language processing in MATCH (Multimodal Access To City Help), a mobile multimodal speech-pen interface to restaurant and subway i...
How do English as a lingua franca (ELF) speakers achieve multimodal cohesion on the basis of their specific interests and cultural backgrounds? From a dialogic and collaborative view of communication, this study focuses on how verbal and nonverbal modes cohere during intercultural conversations. The data include approximately 160 minutes of transcribed video recordings of ELF interactions ...
The goal of this paper is to develop a decision support system (DSS), called Rahyar, as a journey planner for complex, large multimodal urban networks. Rahyar attempts to identify the most desirable itinerary among all feasible alternatives. The desirability of an itinerary is measured by a disutility function, which is defined as a weighted sum of several criteria. The weights represent travelers’...
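The disutility function described in this snippet (a weighted sum of criteria, where lower disutility means a more desirable itinerary) can be sketched as below. The criterion names, weights, and itinerary values are invented for illustration and are not taken from the paper:

```python
def disutility(itinerary, weights):
    """Weighted sum of an itinerary's criterion values.
    Lower disutility means a more desirable itinerary."""
    return sum(weights[criterion] * value for criterion, value in itinerary.items())

# Hypothetical weights reflecting one traveler's preferences.
weights = {"travel_time_min": 1.0, "transfers": 5.0, "fare": 0.5}

# Two hypothetical itineraries: faster with transfers vs. slower direct.
itinerary_a = {"travel_time_min": 30, "transfers": 2, "fare": 10}
itinerary_b = {"travel_time_min": 45, "transfers": 0, "fare": 8}

best = min([itinerary_a, itinerary_b], key=lambda it: disutility(it, weights))
```

With these particular weights the faster itinerary wins despite its transfers; a traveler who weights `transfers` more heavily would flip the ranking, which is exactly the role the per-traveler weights play in such a planner.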