Reliable tracking of 3D deformable faces is a challenging task in computer vision. For one thing, face shapes vary dramatically across identities, poses, and expressions. For another, poor lighting may produce low-contrast images or cast shadows on faces, which significantly degrades the performance of the tracking system. We develop a non-intrusive system for real-time facial tracking with a single web camera.
Face Tracking with Kinect
We develop a framework that tracks face shapes using both color and depth information. Since faces in various poses lie on a nonlinear manifold, we build piecewise linear face models, each covering a range of poses. A low-resolution depth image, captured with a Microsoft Kinect, is used to predict head pose and to generate extra constraints at the face boundary. Our experiments show that exploiting the depth information significantly improves the performance of the tracking system.
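To make the piecewise linear idea concrete, the following sketch (our own illustration, not the authors' implementation; all class and function names are hypothetical) selects a linear shape model based on a depth-predicted yaw angle and reconstructs a landmark shape from that model's PCA basis:

```python
import numpy as np

# Hypothetical sketch: several linear shape models, each covering a yaw range.
# Within a model, a face shape is reconstructed as mean + basis @ coefficients.

class LinearShapeModel:
    def __init__(self, yaw_range, mean_shape, basis):
        self.yaw_range = yaw_range      # (min_yaw, max_yaw) in degrees
        self.mean_shape = mean_shape    # (2N,) stacked landmark coordinates
        self.basis = basis              # (2N, k) PCA basis for this pose range

    def covers(self, yaw):
        lo, hi = self.yaw_range
        return lo <= yaw < hi

    def reconstruct(self, coeffs):
        return self.mean_shape + self.basis @ coeffs

def select_model(models, yaw):
    """Pick the piecewise model whose pose range contains the predicted yaw."""
    for m in models:
        if m.covers(yaw):
            return m
    raise ValueError("yaw outside modeled pose range")

# Toy usage with random models for three pose ranges.
rng = np.random.default_rng(0)
ranges = [(-90, -30), (-30, 30), (30, 90)]
models = [LinearShapeModel(r, rng.normal(size=10), rng.normal(size=(10, 3)))
          for r in ranges]
m = select_model(models, yaw=12.0)   # a frontal yaw picks the middle model
shape = m.reconstruct(np.zeros(3))   # zero coefficients give the mean shape
```

In a real tracker the per-range means and bases would be learned from pose-annotated training shapes; the point here is only the dispatch from pose to local linear model.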
Facial expressions play a significant role in our daily communication. Recognizing these expressions has extensive applications, such as human-computer interaction, multimedia, and security. However, as the basis of expression recognition, identifying the underlying functional facial features remains an open problem. Studies in psychology show that the facial features of expressions are located around the mouth, nose, and eyes, and that their locations are essential for explaining and categorizing facial expressions. Moreover, expressions can be broadly categorized into six popular "basic expressions": anger, disgust, fear, happiness, sadness, and surprise. Each of these basic expressions can be further decomposed into a set of related action units (AUs).
We develop a non-intrusive system for monitoring fatigue by tracking eyelids with a single web camera. Tracking slow eyelid closures is one of the most reliable ways to monitor fatigue during critical performance tasks. The challenges come from arbitrary head movement, occlusion, reflections from glasses, motion blur, etc. We model the shape of the eyes with a pair of parameterized parabolic curves and fit the model in each frame to maximize the total likelihood of the eye regions. Our system tracks face movement and fits the eyelids reliably in real time. We test the system on videos captured from both alert and drowsy subjects, and the experimental results demonstrate its effectiveness.
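The parabolic eyelid model can be illustrated with a small sketch (our own toy version, assuming a simple edge-strength likelihood rather than the paper's actual eye-region likelihood; all names and the grid-search fitter are hypothetical). Each eyelid is a parabola y = a(x - cx)² + cy, and fitting maximizes a score sampled along the curve:

```python
import numpy as np

# Illustrative sketch: each eyelid is a parabola y = a*(x - cx)**2 + cy in
# image coordinates; fitting searches for the parameters that maximize a
# likelihood-like score accumulated along the curve.

def eyelid_curve(a, cx, cy, xs):
    """Evaluate a parabolic eyelid at horizontal pixel positions xs."""
    return a * (xs - cx) ** 2 + cy

def curve_score(params, edge_map, xs):
    """Toy likelihood: total edge strength sampled along the curve."""
    a, cx, cy = params
    ys = eyelid_curve(a, cx, cy, xs).astype(int)
    ys = np.clip(ys, 0, edge_map.shape[0] - 1)
    return edge_map[ys, xs].sum()

def fit_eyelid(edge_map, a_grid, cx_grid, cy_grid):
    """Coarse grid search maximizing the score over parabola parameters."""
    xs = np.arange(edge_map.shape[1])
    best, best_score = None, -np.inf
    for a in a_grid:
        for cx in cx_grid:
            for cy in cy_grid:
                s = curve_score((a, cx, cy), edge_map, xs)
                if s > best_score:
                    best, best_score = (a, cx, cy), s
    return best, best_score

# Toy usage: paint a known parabola into a synthetic edge map and recover it.
xs = np.arange(40)
edge_map = np.zeros((40, 40))
edge_map[eyelid_curve(0.05, 20.0, 10.0, xs).astype(int), xs] = 1.0
best, score = fit_eyelid(edge_map, [0.03, 0.05, 0.07], [18, 20, 22], [8, 10, 12])
```

A real system would replace the grid search with a gradient-based or sampling optimizer over video frames, but the parameterization is the same: two such curves (upper and lower lid) give the eye opening, whose slow closure is the fatigue cue.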
Dyadic Synchrony as a Measure of Trust and Veracity
We investigate how the degree of interactional synchrony can signal whether trust is present, absent, increasing, or declining. We propose an automated, data-driven, and unobtrusive framework for deception detection and analysis in interrogation interviews using visual cues only. The framework consists of face tracking, gesture detection, expression recognition, and synchrony estimation. It automatically tracks the gestures and expressions of both the subject and the interviewer, extracts normalized, meaningful synchrony features, and learns classification models for deception recognition. To validate the proposed synchrony features, we conducted extensive experiments on a database of 242 video samples, showing that these features are highly effective at detecting deception.
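One common way to quantify synchrony between two behavioral signals is lagged, normalized cross-correlation. The sketch below is our own illustration of that idea, not the framework's actual feature set; the signal names and lag window are assumptions:

```python
import numpy as np

# Hedged sketch: a synchrony feature as the maximum Pearson correlation
# between the subject's and interviewer's motion signals over a small
# range of time lags, so delayed mirroring still scores highly.

def normalized_xcorr(x, y, max_lag):
    """Max normalized correlation between x and lag-shifted y, |lag| <= max_lag."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        if len(a) < 2:
            continue
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        if denom > 0:
            best = max(best, float((a * b).sum() / denom))
    return best

# Toy usage: the interviewer mirrors the subject with a 3-frame delay,
# so the lagged correlation peaks near 1.
t = np.linspace(0, 4 * np.pi, 200)
subject = np.sin(t)
interviewer = np.roll(subject, 3)
sync = normalized_xcorr(subject, interviewer, max_lag=5)
```

In practice such per-window scores would be computed over many short segments of the interview and pooled into the normalized synchrony features fed to the deception classifier.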