Spatiotemporal Fusion Framework for Multi-camera Face Orientation Analysis

Authors

  • Chung-Ching Chang
  • Hamid K. Aghajan
Abstract

In this paper, we propose a collaborative technique for face orientation estimation in smart camera networks. The proposed spatiotemporal feature fusion analysis is based on active collaboration between the cameras, which perform data fusion and decision making using features extracted by each camera. First, a head strip mapping method based on a Markov model and a Viterbi-like algorithm is proposed to estimate the relative angular offsets of the cameras with respect to the face. Then, given synchronized face sequences from several camera nodes, the proposed technique determines the orientation and the angular motion of the face using two features, namely the hair-face ratio and the head optical flow. These features yield estimates of the face orientation and the angular velocity through simple analyses, namely the Discrete Fourier Transform (DFT) and Least Squares (LS), respectively. Spatiotemporal feature fusion is implemented via key frame detection in each camera, a forward-backward probabilistic model, and a spatiotemporal validation scheme. The key frames are obtained when a camera node detects a frontal face view and are exchanged between the cameras so that local face orientation estimates can be adjusted to maintain a high confidence level. The forward-backward probabilistic model aims to mitigate error propagation over time. Finally, a spatiotemporal validation scheme is applied for spatial outlier removal and temporal smoothing. A face view is interpolated from the mapped head strips, from which snapshots at the desired view angles can be generated. The proposed technique does not require camera locations to be known a priori, and hence is applicable to vision networks deployed casually without localization.
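
As a rough illustration of the two per-camera features, the sketch below is not the authors' implementation: it assumes a roughly cylindrical head of known image radius rotating about a vertical axis, and the helper names are hypothetical. It fits the angular velocity to horizontal head optical flow by least squares and reads a coarse orientation phase off the DFT of a hair-face ratio sequence.

```python
import numpy as np

def angular_velocity_ls(x, u, r):
    """Least-squares angular velocity (rad/frame) from horizontal flow.

    x : horizontal pixel offsets of tracked head points from the head axis
    u : measured horizontal flow components at those points (pixels/frame)
    r : head radius in pixels (assumed known from the detected head box)

    For a cylindrical head rotating about a vertical axis, a point at offset x
    has expected horizontal image velocity u = omega * sqrt(r^2 - x^2), so
    omega follows from a one-parameter least-squares fit.
    """
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    f = np.sqrt(np.clip(r**2 - x**2, 0.0, None))   # model regressor per point
    return float(np.dot(f, u) / np.dot(f, f))

def orientation_phase_from_ratio(ratio_sequence):
    """Coarse orientation phase from a per-frame hair-face ratio sequence.

    Assumes the ratio varies roughly sinusoidally with head rotation over the
    analysis window (an assumption, not the paper's exact formulation); the
    phase of the dominant DFT bin then serves as a coarse angle estimate.
    """
    r = np.asarray(ratio_sequence, dtype=float)
    spectrum = np.fft.rfft(r - r.mean())
    k = 1 + int(np.argmax(np.abs(spectrum[1:])))   # dominant non-DC bin
    return float(np.angle(spectrum[k]))
```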


Similar References

Face Orientation Estimation in Smart Camera Networks

An important motivation for face orientation analysis derives from the fact that most face recognition algorithms require face images with an approximately frontal view to operate efficiently, such as principal component analysis (PCA) [6], linear discriminant analysis (LDA) [4], and hidden Markov model (HMM) techniques [2]. In a networked camera setting, the desire for a frontal view to pursue an ...


Robust multi-camera view face recognition

This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system...
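
Only as a loose sketch of the pipeline named in this excerpt, and not the cited paper's canonical-covariate formulation, a per-view PCA-then-LDA arrangement with naive score averaging across cameras might be structured as follows (all names below are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def fit_per_view(views, labels):
    """Fit one PCA+LDA pipeline per camera view.

    views  : list of (n_samples, n_pixels) arrays, one per camera
    labels : identity labels shared across views, shape (n_samples,)
    """
    models = []
    for X in views:
        # n_components is illustrative; it must not exceed the sample count.
        clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
        clf.fit(X, labels)
        models.append(clf)
    return models

def predict_fused(models, views):
    """Average posterior class probabilities over camera views."""
    probs = np.mean([m.predict_proba(X) for m, X in zip(models, views)], axis=0)
    return probs.argmax(axis=1)
```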


Model-Based Image Segmentation for Multi-view Human Gesture Analysis

Multi-camera networks offer potential for a variety of vision-based applications through the provisioning of rich visual information. In this paper a method of image segmentation for human gesture analysis in multi-camera networks is presented. Aiming to employ the manifold sources of visual information provided by the network, an opportunistic fusion framework is described and incorporated in the ...


Collaborative Face Orientation Detection in Wireless Image Sensor Networks

Most face recognition and tracking techniques employed in surveillance and human-computer interaction (HCI) systems rely on the assumption of a frontal view of the human face. In alternative approaches, knowledge of the orientation angle of the face in captured images can improve the performance of techniques based on non-frontal face views. In this paper, we propose a collaborative technique f...


Pose and Gaze Estimation in Multi-camera Networks for Non-restrictive HCI

Multi-camera networks offer potential for a variety of novel human-centric applications through the provisioning of rich visual information. In this paper, face orientation analysis and posture analysis are combined as components of a human-centered interface system that allows the user’s intentions and region of interest to be estimated without requiring carried or wearable sensors. In pose estim...



Journal title:

Volume   Issue

Pages   -

Publication date: 2007