Video-driven state-aware facial animation
Authors
Abstract
Synthesizing expressive facial animation for avatars from video is an important problem in computer animation. Some traditional methods track a set of semantic feature points on the face to drive the avatar. However, these methods often suffer from inaccurate detection and the sparseness of the feature points, and they fail to obtain a high-level understanding of facial expressions, leading to less expressive or even wrong expressions on the avatar. In this paper, we propose a state-aware synthesis framework. Instead of simply fitting a 3D face to the 2D feature points, we use expression states obtained by a set of low-cost classifiers (based on local binary patterns and support vector machines) applied to the face texture to guide the face-fitting procedure. Our experimental results show that the proposed hybrid framework combines the advantages of the original feature-point-based methods with the expression-state awareness of the classifiers, and thus vivifies and enriches the avatar's facial expressions. Copyright © 2012 John Wiley & Sons, Ltd.
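The paper's classifiers themselves are not reproduced here, but the core idea (a low-cost texture classifier built from local binary pattern histograms and a support vector machine) can be sketched. The following is a minimal illustration on synthetic patches, assuming scikit-image's `local_binary_pattern` and scikit-learn's `SVC`; the patch data and the two classes are invented stand-ins, not the authors' setup.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, P=8, R=1):
    # "uniform" LBP produces P + 2 distinct codes; a normalized
    # histogram over those codes serves as the texture descriptor
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

rng = np.random.default_rng(0)
# invented stand-ins for face-region patches: class 0 is smooth
# (gradient-like) texture, class 1 is high-frequency texture
smooth = [np.tile(np.arange(24, dtype=float), (24, 1))
          + rng.normal(0.0, 0.1, (24, 24)) for _ in range(20)]
noisy = [rng.integers(0, 255, (24, 24)).astype(float) for _ in range(20)]

X = np.array([lbp_histogram(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

# a low-cost linear SVM over the LBP histograms
clf = SVC(kernel="linear").fit(X, y)
```

In the paper's setting, such a classifier would be trained per expression state (for example, mouth open versus closed) on real face-texture patches, and its output would then bias the 3D face-fitting step rather than serve as the final result.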
Similar Papers
Speech-driven 3d facial animation for mobile entertainment
This paper presents an entertainment-oriented application for mobile services that generates customized speech-driven 3D facial animation and delivers it to end users via MMS (Multimedia Messaging Service). Several key methods of this application are discussed, including a 3D facial model built from three photos, 3D facial animation driven online by speech or text, and the video format transform...
Talking Head: Synthetic Video Facial Animation in MPEG-4
We present a system for facial modeling and animation that aims at the generation of photo-realistic models and performance-driven animation. It is a practical implementation of an MPEG-4-compliant Synthetic Video Facial Animation pipeline (Simple and Calibration Profiles, with some modifications), which includes: facial feature recognition and tracking on a real video sequence; obtaining, encoding, net...
Performance Driven Facial Animation using Blendshape Interpolation
This paper describes a method of creating facial animation using a combination of motion capture data and blendshape interpolation. An animator can design a character as usual, but use motion capture data to drive facial animation, rather than animate by hand. The method is effective even when the motion capture actor and the target model have quite different shapes. The process consists of sev...
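Blendshape interpolation, as referenced above, reduces to adding weighted per-target deltas to a base mesh. A minimal NumPy sketch of that formula; the toy mesh, target names, and weights are invented for illustration and are not from the cited paper:

```python
import numpy as np

def blend(base, targets, weights):
    # weighted blendshape interpolation: start from the base mesh
    # and add each target's delta scaled by its weight
    out = base.astype(float).copy()
    for target, w in zip(targets, weights):
        out += w * (target - base)
    return out

# toy 4-vertex "mesh" (one x, y, z row per vertex); the target
# shapes below are invented for illustration
base = np.zeros((4, 3))
smile = base.copy()
smile[0] = [0.0, 1.0, 0.0]      # raise a mouth-corner vertex
jaw_open = base.copy()
jaw_open[3] = [0.0, -2.0, 0.0]  # drop a jaw vertex

mesh = blend(base, [smile, jaw_open], [0.5, 0.25])
# vertex 0 moves halfway toward the smile pose,
# vertex 3 a quarter of the way toward the open jaw
```

In a performance-driven setting, the weights would be estimated per frame from the captured motion rather than set by hand.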
Visual speech synthesis from 3D video
Data-driven approaches to 2D facial animation from video have achieved highly realistic results. In this paper we introduce a process for visual speech synthesis from 3D video capture to reproduce the dynamics of 3D face shape and appearance. Animation from real speech is performed by path optimisation over a graph representation of phonetically segmented captured 3D video. A novel similarity m...
FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality
We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR in...
Journal: Journal of Visualization and Computer Animation
Volume: 23, Issue: -
Pages: -
Publication year: 2012