Search results for: facial animation

Number of results: 75,517

Journal: IEEE Trans. Circuits Syst. Video Techn., 1999
Fabio Lavagetto, Roberto Pockaj

In this paper we propose a method for implementing a high-level interface for the synthesis and animation of virtual faces in full compliance with the MPEG-4 specification. This method allows us to implement the Simple Facial Object profile and part of the Calibration Facial Object Profile. In fact, starting from a facial wire-frame and from a set of configuration files, the devel...

2001
Zsófia Ruttkay, Han Noot

The animation of synthetic faces is still a low-level process requiring much human expertise and hardware/software resources. The constraint-based facial animation editor system, FESINC, provides two kinds of support: it allows the a priori, declarative definition of dynamic expressions and requirements, and it ensures that these requirements are fulfilled while the animation is being made. The novelty of the approa...

2008
Jixu Chen

This report proposes an automatic face animation method. First, 28 facial features are automatically extracted from the video-recorded face. Then, using a linear model, we decompose the variation of the 28 facial features into shape variation and expression variation. Finally, the expression variation is used to control the animation of the target face. All the tracking and ...
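
The linear decomposition described above can be sketched as follows. This is an illustrative toy, not the report's implementation: the shape and expression bases are random stand-ins for bases that would be learned from data, and all names (`decompose`, `mean_face`, `S`, `E`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 56                        # 28 facial features, (x, y) each
mean_face = rng.normal(size=n_feat)
S = rng.normal(size=(n_feat, 4))   # shape (identity) basis, 4 modes
E = rng.normal(size=(n_feat, 6))   # expression basis, 6 modes

def decompose(f):
    """Least-squares split of f - mean_face into shape and expression coefficients."""
    B = np.hstack([S, E])                              # joint basis
    coeffs, *_ = np.linalg.lstsq(B, f - mean_face, rcond=None)
    return coeffs[:S.shape[1]], coeffs[S.shape[1]:]

# Synthesize a feature vector from known coefficients and recover them.
a_true, b_true = rng.normal(size=4), rng.normal(size=6)
f = mean_face + S @ a_true + E @ b_true
a, b = decompose(f)
print(np.allclose(a, a_true), np.allclose(b, b_true))  # → True True
```

Once the expression coefficients are isolated this way, they can drive the target face independently of the performer's identity, which is the point of the split.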

Journal: Journal of Visualization and Computer Animation, 2008
Xuecheng Liu, Tianlu Mao, Shihong Xia, Yong Yu, Zhaoqi Wang

This paper presents an efficient method to construct optimal facial animation blendshapes from given blendshape sketches and facial motion capture data. First, a mapping function is established between the "Marker Face" of the target and that of the performer by RBF interpolation of selected feature points. Sketched blendshapes are transferred to the performer's "Marker Face" by using a motion vector adjustment techniqu...
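
The RBF mapping step can be sketched as below. This is a minimal illustration under assumptions of my own (a Gaussian kernel, synthetic correspondences, and hypothetical `rbf_fit`/`rbf_eval` helpers), not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(size=(10, 2))          # feature points on the target "Marker Face"
dst = src * 1.5 + np.array([0.2, -0.1])  # same points on the performer (affine here)

def rbf_fit(src, dst, eps=1.0):
    """Solve for RBF weights so the interpolant maps src exactly onto dst."""
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    K = np.exp(-(eps * d) ** 2)          # Gaussian kernel matrix
    return np.linalg.solve(K, dst)       # one weight column per output dimension

def rbf_eval(x, src, w, eps=1.0):
    """Evaluate the fitted RBF mapping at query points x."""
    d = np.linalg.norm(x[:, None] - src[None, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

w = rbf_fit(src, dst)
print(np.allclose(rbf_eval(src, src, w), dst))  # exact at the data points → True
```

By construction the mapping reproduces the matched feature points exactly and interpolates smoothly in between, which is why RBFs are a common choice for sparse marker correspondences.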

2014
Mohammed Hazim Alkawaz, Ahmad Hoirul Basori, Dzulkifli Mohamad, Farhan Mohamed

Generating extreme appearances such as sweating when scared, tears when crying, and blushing (in anger and happiness) is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and color changes are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles, emotions, or the fluid prop...

1998
Zsófia Ruttkay, Paul J. W. ten Hagen, Han Noot, Mark Savenije

Performer-driven animation has been used with success [1], first of all to reproduce human body motion. While there are different capturing hardware-software systems to map the motion of a performer on the motion of a model of the body or face, little has been done both on the technical and on the theoretical level to support the inventive re-use of captured data. The topic of this paper is an ani...

2000
Shiguang Shan, Wen Gao, Jie Yan, Hongming Zhang, Xilin Chen

In this paper, a methodology for synthesizing an individual face from given orthogonal photos is proposed, and an integrated speech-driven facial animation system is presented. First, in order to capture the given subject's personal facial configuration, a novel coarse-to-fine strategy based on facial texture and deformable templates is proposed to localize facial feature points in the image of the fro...

2002
Irene Albrecht, Jörg Haber, Hans-Peter Seidel

Speech-synchronized facial animation that controls only the movement of the mouth is typically perceived as wooden and unnatural. We propose a method to generate additional facial expressions, such as movement of the head, the eyes, and the eyebrows, fully automatically from the input speech signal. This is achieved by extracting prosodic parameters such as pitch flow and power spectrum from the ...
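
Extraction of the two prosodic parameters mentioned (pitch and frame power) can be sketched as below. The frame size, sample rate, search band, and the autocorrelation-based pitch estimator are assumptions for illustration, not the method from the paper:

```python
import numpy as np

def frame_features(x, sr=16000, frame=1024):
    """Per-frame power and a simple autocorrelation-based pitch estimate."""
    pitches, powers = [], []
    for i in range(0, len(x) - frame, frame):
        w = x[i:i + frame]
        powers.append(float(np.mean(w ** 2)))          # frame energy
        ac = np.correlate(w, w, mode="full")[frame - 1:]  # lags 0..frame-1
        lo, hi = sr // 400, sr // 60                   # search 60–400 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))           # strongest periodicity
        pitches.append(sr / lag)
    return np.array(pitches), np.array(powers)

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 120 * t)            # one second of a 120 Hz test tone
pitch, power = frame_features(x, sr)
print(round(float(np.median(pitch))))      # → 120
```

Features like these, computed frame by frame, give a time series that can then be correlated with head, eye, and eyebrow motion.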

2012
François Rocca, Thierry Ravet, Joëlle Tilmanne

This project aims to review existing approaches to animating the facial features of a virtual character in real time. We investigate a tool that will be usable by artists in live performances, or to facilitate the creation of animated movies thanks to a quick preview of the results. The animation data are obtained by facial motion capture instrumentation (marker-based or markerless) or are gener...

2006
Yuru Pei, Hongbin Zha

We present a novel method to transfer speech animation recorded in low-resolution videos onto realistic 3D facial models. Unsupervised learning is applied to a speech video corpus to find the underlying manifold of facial configurations. K-means clustering is applied in the low-dimensional space to find key speaking-related facial shapes. With a small set of laser-scanner-captured 3D models relate...
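
The clustering step described above can be sketched with a plain k-means over points in a low-dimensional embedding. The data here are synthetic blobs standing in for embedded facial configurations; the paper's manifold-learning step is not reproduced, and the `kmeans` helper is an illustrative assumption:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Basic Lloyd's k-means: returns cluster centers and point labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # init from data points
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = np.argmin(dists, axis=1)               # assign to nearest center
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j]             # keep center if cluster empties
                            for j in range(k)])
    return centers, labels

rng = np.random.default_rng(2)
# three well-separated blobs standing in for key speaking-related shapes
X = np.vstack([rng.normal(m, 0.1, size=(40, 2)) for m in ((0, 0), (3, 0), (0, 3))])
centers, labels = kmeans(X, 3)
print(centers.shape)  # → (3, 2)
```

The resulting centers play the role of the "key shapes": representative configurations for which high-resolution 3D counterparts are then provided.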

[Chart: number of search results per year]
