Search results for: facial gestures
Number of results: 68477
The flow of spoken interaction between human interlocutors is a widely studied topic. Amongst other things, studies have shown that we use a number of facial gestures to improve this flow, for example to control the taking of turns. Such gestures ought to be useful in systems where an animated talking head is used, be they systems for computer-mediated human-human dialogue or spoken di...
In our current research we use CBR to identify the emotional state of a user during her interaction with a recommender system by analysing pictures of her momentary facial expression. In previous work [2] we introduced PhotoMood, a CBR system that uses gestures to identify emotions from the user's facial self-pictures, and presented preliminary experiments analysing only the external mouth conto...
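The snippet does not show PhotoMood's internals, but the CBR "retrieve" step over mouth-contour features can be pictured as a nearest-neighbour lookup in a case base. A minimal sketch follows; the feature vectors, case base, and emotion labels are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical case base: each case pairs a mouth-contour feature
# vector (e.g., normalized landmark coordinates) with an emotion label.
# The real PhotoMood feature set is not specified in the snippet above.
case_base = [
    (np.array([0.12, 0.80, 0.33, 0.41]), "happy"),
    (np.array([0.45, 0.22, 0.67, 0.10]), "sad"),
    (np.array([0.30, 0.55, 0.50, 0.25]), "neutral"),
]

def retrieve_emotion(query: np.ndarray, k: int = 1) -> str:
    """CBR 'retrieve' step: label the query with a majority vote over
    the k stored cases nearest in Euclidean distance."""
    distances = [(np.linalg.norm(query - feats), label)
                 for feats, label in case_base]
    distances.sort(key=lambda pair: pair[0])
    top = [label for _, label in distances[:k]]
    return max(set(top), key=top.count)

print(retrieve_emotion(np.array([0.14, 0.78, 0.35, 0.40])))  # -> "happy"
```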
Perceptual user interfaces are becoming increasingly important because they offer a more natural interaction with the computer via speech recognition, haptics, computer vision techniques, and so on. In this paper we present a visual-based interface (VBI) that analyzes users' facial gestures and motion. This interface works in real time and acquires images from a conventional webcam. Due to this, it...
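As a rough illustration of the real-time capture loop such an interface requires, the following sketch reads frames from a conventional webcam with OpenCV; since the paper's own gesture-analysis pipeline is not in the snippet, a stock Haar-cascade face detector stands in for it.

```python
import cv2

# Stand-in detector: OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # conventional webcam, as in the abstract

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a box around each detected face; a real VBI would analyze
        # gestures and motion inside this region instead.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("VBI", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```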
Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are expressed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial and vocal gestures. Recently, spontaneous multi-modal emotion recognition has been studied extensively. In this pape...
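The snippet does not specify how the facial and vocal modalities are combined; one common baseline for multi-modal emotion recognition is late fusion, which averages per-modality class scores. The toy sketch below uses made-up score vectors and labels.

```python
import numpy as np

# Per-modality classifiers each emit a class-probability vector; the
# fused decision averages them. Scores and labels are illustrative.
labels = ["anger", "joy", "neutral"]
p_face = np.array([0.6, 0.3, 0.1])   # scores from a facial-gesture model
p_voice = np.array([0.2, 0.5, 0.3])  # scores from a vocal model

p_fused = (p_face + p_voice) / 2
print(labels[int(np.argmax(p_fused))])  # -> "anger"
```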
This paper addresses the problem of recognizing emotional facial gestures from static images at thumbnail resolution. Several experiments are presented: a holistic approach and two local approaches, all using SVMs as classifier engines. The experimental results of applying our method are reported.
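A plausible minimal reading of the holistic variant is: flatten each thumbnail into a pixel vector and train an SVM on those vectors. The sketch below does this with scikit-learn on synthetic data; the resolution, kernel, class set, and hyperparameters are assumptions, as none are given in the snippet.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: 200 fake 24x24 thumbnails, 3 gesture classes.
rng = np.random.default_rng(0)
X = rng.random((200, 24 * 24))
y = rng.integers(0, 3, size=200)

# Holistic pipeline: standardize pixel vectors, then an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```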
The current work investigates issues of expressivity and personality traits for Embodied Conversational Agents in environments that allow for dynamic interactions with human users. Such environments are defined and modelled with the use of state-of-the-art game engine technology. We focus on generating simple ECA behaviours, composed of facial expressions and gestures, in a well-defined context...
This paper describes the synthesis of sign-language animation for mobile environments. Sign language is synthesized by using either the motion-capture or motion-primitive method. An editing system can add facial expressions, mouth shapes and gestures to the sign-language CG animation. Sign-language animation is displayed on PDA screens to inform the user of his/her mobile environment.
We have developed a general-purpose, modular architecture for an Embodied Conversational Agent (ECA) called Greta. Our 3D agent is able to communicate using verbal and nonverbal channels such as gaze, head and torso movements, facial expressions, and gestures. It follows the SAIBA framework [10] and the MPEG-4 [6] standard. Our system is optimized for use in interactive applications.
Which tools will be available for tomorrow's usability tester and HCI researcher? The multimodal interfaces of future applications (driven by speech, facial expressions, gestures, eye movement, physiological data, etc.), remote tests and multiuser applications pose new challenges with respect to data collection and analysis. During this session their usefulness will be discussed.
Can we create a virtual storyteller that is expressive enough to convey a story to an audience in a natural way? What are the most important features for creating such a character? This paper presents a study in which the influence of different modalities on the perception of a story told by both a synthetic storyteller and a real one is analyzed. In order to evaluate it, three modes of communicati...