Realizing Audio-Visually Triggered ELIZA-Like Non-verbal Behaviors

Authors

  • Hiroshi G. Okuno
  • Kazuhiro Nakadai
  • Hiroaki Kitano
Abstract

We are studying how to create social physical agents, i.e., humanoids, that perform actions driven by real-time audio-visual tracking of multiple talkers. Social skills require complex perceptual and motor capabilities as well as communicative ones. Identifying the primary features when designing building blocks for social skills is critical, because the performance of social interaction is usually evaluated for the system as a whole rather than for each component. We investigate the minimum functionalities needed for social interaction, assuming that the humanoid is equipped with auditory and visual perception and simple motor control, but no sound output. A real-time audio-visual multiple-talker tracking system is implemented on the humanoid SIG using sound source localization, stereo vision, face recognition, and motor control. It extracts auditory and visual streams and associates them by their proximity in localization. Socially-oriented attention control makes the best use of personality variations classified by the Interpersonal Theory of psychology, and also provides task-oriented functions with a decaying belief factor for each stream. We demonstrate that the resulting behavior of SIG invites users to participate in the interaction and encourages them to explore SIG's behaviors. These demonstrations show that SIG behaves like a physical, non-verbal ELIZA.
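The abstract sketches the core mechanism: auditory and visual streams are associated by the proximity of their localization estimates, each stream carries a belief that decays over time, and attention control uses those quantities to pick a focus. The Python sketch below illustrates that idea only; the threshold, decay rate, and all names are illustrative assumptions, not taken from the paper or from SIG's actual implementation.

```python
# Minimal, hypothetical sketch of audio-visual stream association and belief
# decay, loosely following the description in the abstract (not SIG's code).
from dataclasses import dataclass
from typing import List, Optional

ASSOCIATION_THRESHOLD_DEG = 10.0  # assumed azimuth proximity threshold
BELIEF_DECAY = 0.95               # assumed per-step decay factor for stream belief
BELIEF_FLOOR = 0.1                # assumed level below which a stream is discarded

@dataclass
class Stream:
    modality: str                       # "audio" or "visual"
    azimuth_deg: float                  # estimated direction of the talker
    belief: float = 1.0                 # confidence that the stream is still active
    partner: Optional["Stream"] = None  # associated stream of the other modality

def decay_beliefs(streams: List[Stream]) -> List[Stream]:
    """Decay every stream's belief and drop streams whose belief has faded."""
    for s in streams:
        s.belief *= BELIEF_DECAY
    return [s for s in streams if s.belief > BELIEF_FLOOR]

def associate(audio: List[Stream], visual: List[Stream]) -> None:
    """Pair each audio stream with the nearest visual stream in azimuth."""
    for a in audio:
        nearest = min(visual, key=lambda v: abs(v.azimuth_deg - a.azimuth_deg),
                      default=None)
        if nearest and abs(nearest.azimuth_deg - a.azimuth_deg) <= ASSOCIATION_THRESHOLD_DEG:
            a.partner, nearest.partner = nearest, a

def select_focus(streams: List[Stream]) -> Optional[Stream]:
    """Attention control: prefer associated audio-visual streams, then higher belief."""
    return max(streams, key=lambda s: (s.partner is not None, s.belief), default=None)
```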

Similar Articles

Realizing personality in audio-visually triggered non-verbal behaviors

Controlling robot behaviors has recently become more important as active perception for robots, in particular active audition in addition to active vision, has made remarkable progress. We are studying how to create social humanoids that perform actions driven by real-time audio-visual tracking of multiple talkers. In this paper, we present personality as a means of controlling non-verbal behaviors. ...

Non-Verbal Eliza-like Human Behaviors in Human-Robot Interaction through Real-Time Auditory and Visual Multiple-Talker Tracking

Sound is essential to enhance visual experience and human-robot interaction, but most research and development efforts are directed mainly towards sound generation, speech synthesis, and speech recognition. The reason why so little attention has been paid to auditory scene analysis is that real-time perception of a mixture of sounds is difficult. Recently, Nakadai et al. have developed re...

Synface - verbal and non-verbal face animation from audio

We give an overview of SynFace, a speech-driven face animation system originally developed for the needs of hard-of-hearing users of the telephone. For the 2009 LIPS challenge, SynFace includes not only articulatory motion but also non-verbal motion of gaze, eyebrows and head, triggered by detection of acoustic correlates of prominence and cues for interaction control. In perceptual evaluations...

Sonic Tennis: a rhythmic interaction game for mobile devices

This paper presents an audio-based tennis simulation game for mobile devices, which uses motion input and non-verbal audio feedback as the exclusive means of interaction. Players have to listen carefully to the provided auditory cues, such as racquet hits and ball bounces, rhythmically synchronizing their movements in order to keep the ball in play. The device can be swung freely and act as a full-...

NOCOA+: Multimodal Computer-Based Training for Social and Communication Skills

Non-verbal communication incorporating visual, audio, and contextual information is important for making sense of and navigating the social world. Individuals who have trouble with social situations often have difficulty recognizing these sorts of non-verbal social signals. In this article, we propose a training tool, NOCOA+ (Non-verbal COmmunication for Autism plus), that uses utterances in visual and...

Journal:

Volume   Issue 

Pages  -

Publication date: 2002