ANGELICA: Choice of output modality in an embodied agent
Abstract
The ANGELICA project addresses the problem of modality choice in information presentation by embodied, human-like agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project a model of the different factors influencing this choice will be developed and integrated into a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
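To make the per-item decision concrete, the following is a minimal illustrative sketch, not the ANGELICA model itself: a toy rule that, for one piece of route information, picks speech, a pointing gesture, or both. The factors used here (visibility of the referent in the 3D scene, whether the content is spatial, how hard it is to verbalize) and all names and thresholds are assumptions for illustration only.

```python
# Hypothetical sketch (not from the ANGELICA project): a toy rule-based chooser
# deciding, per piece of route information, which output modality an embodied
# agent should use. All factors and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    SPEECH = "speech"
    GESTURE = "gesture"
    BOTH = "speech+gesture"


@dataclass
class InfoUnit:
    """One piece of route information to present (assumed representation)."""
    referent_visible: bool   # is the landmark or turn visible in the 3D scene?
    spatial: bool            # is the content inherently spatial (direction, location)?
    complexity: float        # 0..1, how hard the content is to verbalize unambiguously


def choose_modality(unit: InfoUnit) -> Modality:
    # Visible, spatial referents are natural candidates for pointing;
    # complex content keeps redundant speech so the user is not left guessing.
    if unit.referent_visible and unit.spatial:
        return Modality.BOTH if unit.complexity > 0.5 else Modality.GESTURE
    return Modality.SPEECH


# Example: "turn left at the church you can see over there"
print(choose_modality(InfoUnit(referent_visible=True, spatial=True, complexity=0.7)))
# -> Modality.BOTH
```

In the project itself, such factors would be weighed by an empirically grounded model and integrated into the generation pipeline rather than hard-coded rules; the sketch only illustrates the shape of the decision being modeled.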
Similar resources
A. Nijholt, Generating Embodied Information Presentations
The output modalities available for information presentation by embodied, human-like agents include both language and various nonverbal cues such as pointing and gesturing. These human, nonverbal modalities can be used to emphasize, extend or even replace the language output produced by the agent. To deal with the interdependence between language and nonverbal signals, their production processe...
SmartKom: Symmetric Multimodality in an Adaptive and Reusable Dialogue Shell
We introduce the notion of symmetric multimodality for dialogue systems in which all input modes (e.g. speech, gesture, facial expression) are also available for output, and vice versa. A dialogue system with symmetric multimodality must not only understand and represent the user's multimodal input, but also its own multimodal output. We present the SmartKom system, which provides full symmetric ...
Towards Symmetric Multimodality: Fusion and Fission of Speech, Gesture, and Facial Expression
We introduce the notion of symmetric multimodality for dialogue systems in which all input modes (e.g. speech, gesture, facial expression) are also available for output, and vice versa. A dialogue system with symmetric multimodality must not only understand and represent the user's multimodal input, but also its own multimodal output. We present the SmartKom system, which provides full symmetric ...
Impact of Interaction and Output Modality on the Vocabulary Learning and Retention of Iranian EFL Learners
This study investigated the impact of interaction and output modality on the vocabulary learning and retention of EFL learners. To investigate the impact of interaction, solitary (n = 69) and collaborative (n = 62) groups served as experimental groups, and a No Interaction, No Output group (n = 26) served as the control group. To address the effect of modality, spoken (n = 39) and written (n = 31) modalities served as experimental ...