Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ExpNet: Landmark-Free, Deep, 3D Facial Expressions

Feng-Ju Chang, Anh Tuan Tran, Tal Hassner, Iacopo Masi, Ram Nevatia, Gerard Medioni

2018-02-02 · 3D Facial Expression Recognition · Facial Landmark Detection · 3D Face Reconstruction · Emotion Recognition

Paper · PDF · Code (official)

Abstract

We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not rely on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations, directly from image intensities. By foregoing facial landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented in-the-wild viewing conditions. We build on those methods by showing that facial expressions can also be estimated by a robust, deep, landmark-free approach. Our ExpNet CNN is applied directly to the intensities of a face image and regresses a 29D vector of 3D expression coefficients. We propose a unique method for collecting data to train this network, leveraging the robustness of deep networks to training label noise. We further offer a novel means of evaluating the accuracy of estimated expression coefficients: by measuring how well they capture facial emotions on the CK+ and EmotiW-17 emotion recognition benchmarks. We show that our ExpNet produces expression coefficients which better discriminate between facial emotions than those obtained using state-of-the-art facial landmark detection techniques. Moreover, this advantage grows as image scales drop, demonstrating that our ExpNet is more robust to scale changes than landmark detection methods. Finally, at the same level of accuracy, our ExpNet is orders of magnitude faster than its alternatives.
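To make the abstract concrete: ExpNet regresses a 29-D vector of 3DMM expression coefficients directly from image intensities, and those coefficients deform a face shape through a linear expression basis. The following NumPy sketch (function and variable names are hypothetical, not from the paper's code) shows how such a coefficient vector acts on a 3DMM shape:

```python
import numpy as np

def apply_expression(mean_shape, expr_basis, expr_coeffs):
    """Deform a neutral 3DMM face shape with expression coefficients.

    mean_shape:  (3N,) flattened vertex coordinates of the neutral face
    expr_basis:  (3N, 29) linear expression basis of the 3DMM
    expr_coeffs: (29,) coefficients, e.g. the vector ExpNet regresses
    """
    # The 3DMM expression model is linear: shape = mean + basis @ coefficients
    return mean_shape + expr_basis @ expr_coeffs

# Toy example: a "face" with 2 vertices (6 coordinates) and a 29-D coefficient vector
rng = np.random.default_rng(0)
mean = np.zeros(6)
basis = rng.standard_normal((6, 29))

neutral = apply_expression(mean, basis, np.zeros(29))   # all-zero coefficients
deformed = apply_expression(mean, basis, rng.standard_normal(29))
```

With all-zero coefficients the shape stays at the neutral mean; any non-zero coefficient vector displaces the vertices along the corresponding basis directions, which is what the CNN's 29-D output controls.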

Results

Task                             | Dataset             | Metric               | Value | Model
Facial Recognition and Modelling | 2017_test set       | 14 gestures accuracy | 2     | aan
Facial Recognition and Modelling | REALY               | all                  | 2.306 | ExpNet
Facial Recognition and Modelling | REALY (side-view)   | all                  | 2.476 | ExpNet
Face Reconstruction              | REALY               | all                  | 2.306 | ExpNet
Face Reconstruction              | REALY (side-view)   | all                  | 2.476 | ExpNet
Face Reconstruction              | 2017_test set       | 14 gestures accuracy | 2     | aan
Facial Expression Recognition (FER) | 2017_test set    | 14 gestures accuracy | 2     | aan
3D                               | REALY               | all                  | 2.306 | ExpNet
3D                               | REALY (side-view)   | all                  | 2.476 | ExpNet
3D                               | 2017_test set       | 14 gestures accuracy | 2     | aan
3D Face Modelling                | 2017_test set       | 14 gestures accuracy | 2     | aan
3D Face Modelling                | REALY               | all                  | 2.306 | ExpNet
3D Face Modelling                | REALY (side-view)   | all                  | 2.476 | ExpNet
3D Face Reconstruction           | REALY               | all                  | 2.306 | ExpNet
3D Face Reconstruction           | REALY (side-view)   | all                  | 2.476 | ExpNet
3D Face Reconstruction           | 2017_test set       | 14 gestures accuracy | 2     | aan

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
A Robust Incomplete Multimodal Low-Rank Adaptation Approach for Emotion Recognition (2025-07-15)
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation (2025-07-11)
CAST-Phys: Contactless Affective States Through Physiological signals Database (2025-07-08)
Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis (2025-07-06)
How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models? (2025-06-25)
MATER: Multi-level Acoustic and Textual Emotion Representation for Interpretable Speech Emotion Recognition (2025-06-24)