Realizing Multimodal Behavior - Closing the Gap between Behavior Planning and Embodied Agent Presentation

Authors

  • Michael Kipp
  • Alexis Héloir
  • Marc Schröder
  • Patrick Gebhard
Abstract

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are required. We suggest distinguishing realization planning, where gesture and speech are processed symbolically using the Behavior Markup Language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
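
The timing-resolution step mentioned in the abstract can be pictured with a small sketch. The Python fragment below is only an illustration of the idea, not the paper's implementation (which uses a formal constraint solver): behaviors carry symbolic sync points, BML-style constraints equate sync points across behaviors, and absolute start times are derived by propagation from an anchored behavior such as the synthesized speech. All names here (Behaviour, resolve_times, the example offsets) are hypothetical.

```python
# Hypothetical sketch of BML-style timing resolution before handing absolute
# times to the presentation layer. Not the paper's API; names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Behaviour:
    """A planned behaviour (gesture, speech act, ...) with symbolic sync points."""
    bid: str
    duration: float
    sync_offsets: dict = field(default_factory=dict)  # sync point -> offset from start

def resolve_times(behaviours, constraints, anchors):
    """Turn symbolic constraints like time('g1','stroke') == time('s1','tm1')
    into absolute start times by simple propagation.

    anchors: behaviours whose absolute start time is already fixed
             (e.g. by the speech synthesiser's phoneme timings)."""
    start = dict(anchors)                       # behaviour id -> absolute start time
    changed = True
    while changed:                              # propagate until nothing new is derived
        changed = False
        for (b1, p1), (b2, p2) in constraints:  # constraint: the two sync points coincide
            o1 = behaviours[b1].sync_offsets[p1]
            o2 = behaviours[b2].sync_offsets[p2]
            if b1 in start and b2 not in start:
                start[b2] = start[b1] + o1 - o2
                changed = True
            elif b2 in start and b1 not in start:
                start[b1] = start[b2] + o2 - o1
                changed = True
    return start

# Example: align a gesture stroke with a word boundary in the speech.
behaviours = {
    "s1": Behaviour("s1", 2.0, {"start": 0.0, "tm1": 0.8, "end": 2.0}),
    "g1": Behaviour("g1", 1.2, {"start": 0.0, "stroke": 0.5, "end": 1.2}),
}
constraints = [(("g1", "stroke"), ("s1", "tm1"))]
print(resolve_times(behaviours, constraints, anchors={"s1": 0.0}))
# -> {'s1': 0.0, 'g1': 0.3}: the gesture must start 0.3 s after the speech
```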

Related articles

Towards a Common Framework for Multimodal Generation: The Behavior Markup Language

This paper describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs). We propose a three-stage model we call SAIBA, where the stages represent intent planning, behavior planning and behavior realization. A Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two s...
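
As a rough illustration of the three-stage SAIBA idea, the hedged Python sketch below chains intent planning, behavior planning, and realization, with FML and BML strings as the mediating representations. The function names and XML snippets are invented placeholders, not the specification's actual interfaces.

```python
# Toy sketch of the SAIBA pipeline: intent planning -> behavior planning ->
# behavior realization. All names and XML strings are illustrative only.

def plan_intent(goal: str) -> str:
    """Intent planning: decide what to communicate; output FML (no physical detail)."""
    return f'<fml><performative type="inform" content="{goal}"/></fml>'

def plan_behavior(fml: str) -> str:
    """Behavior planning: map communicative functions onto concrete behaviors; output BML.
    (The FML input is ignored here for brevity.)"""
    return ('<bml><speech id="s1"><text>Hello there!</text></speech>'
            '<gesture id="g1" lexeme="wave" stroke="s1:start"/></bml>')

def realize_behavior(bml: str) -> None:
    """Behavior realization: schedule and animate the planned behaviors on the agent."""
    print("animating:", bml)

realize_behavior(plan_behavior(plan_intent("greet the user")))
```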

On the use of the multimodal clues in observed human behavior for the modeling of agent cooperative behavior

We introduce TYCOON, a framework we are developing for the analysis of human verbal and non-verbal behavior. This framework includes a typology made of six primitive types of cooperation between communicative modalities: equivalence, specialization, transfer, redundancy, complementarity and concurrency. We have used this typology when annotating videotaped multimodal human-computer interaction an...
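
For illustration only, the six cooperation types listed in this abstract could be modeled as annotation labels; the enum and helper below are hypothetical and not part of TYCOON itself.

```python
# Minimal sketch: TYCOON's six cooperation types as annotation labels.
# The Cooperation enum and annotate() helper are hypothetical.

from enum import Enum, auto

class Cooperation(Enum):
    """The six primitive types of cooperation between modalities named above."""
    EQUIVALENCE = auto()
    SPECIALIZATION = auto()
    TRANSFER = auto()
    REDUNDANCY = auto()
    COMPLEMENTARITY = auto()
    CONCURRENCY = auto()

def annotate(segment: str, label: Cooperation) -> tuple:
    """Attach a cooperation label to a segment of observed multimodal behavior."""
    return (segment, label.name)

print(annotate("points at the map while saying 'over there'", Cooperation.COMPLEMENTARITY))
```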

Automatic Generation of Gaze and Gestures for Dialogues between Embodied Conversational Agents: System Description and Study on Gaze Behavior

In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents’ gaze behavior is informed by theories of human face-to-face gaze behavior. Gesture...

Automatic Generation of Gaze and Gestures for Dialogues between Embodied Conversational Agents

In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents’ gaze behavior is informed by theories of human face-to-face gaze behavior. Gesture...

Multimodal Expressive Embodied Conversational Agents [Multimodal expressive ECAs]

In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We are using the taxonomy of communicative functions developed by Isabella Poggi [22] to specify the behavior of the agent. Based on this taxonomy a representation language, Affective Presentation Ma...

Publication year: 2010