Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


CAGE: Circumplex Affect Guided Expression Inference

Niklas Wagner, Felix Mätzler, Samed R. Vossberg, Helen Schneider, Svetlana Pavlitska, J. Marius Zöllner

Published: 2024-04-23
Tasks: Dominance Estimation · Valence Estimation · Facial Expression Recognition (FER) · Arousal Estimation · Emotion Recognition
Links: Paper · PDF · Code (official)

Abstract

Understanding emotions and expressions is a task of interest across multiple disciplines, especially for improving user experiences. Contrary to the common perception, it has been shown that emotions are not discrete entities but instead exist along a continuum. People understand discrete emotions differently due to a variety of factors, including cultural background, individual experiences, and cognitive biases. Therefore, most approaches to expression understanding, particularly those relying on discrete categories, are inherently biased. In this paper, we present a comparative in-depth analysis of two common datasets (AffectNet and EMOTIC) equipped with the components of the circumplex model of affect. Further, we propose a model for the prediction of facial expressions tailored for lightweight applications. Using a small-scale MaxViT-based model architecture, we evaluate the impact of training with discrete expression category labels alongside the continuous valence and arousal labels. We show that considering valence and arousal in addition to discrete category labels helps to significantly improve expression inference. The proposed model outperforms the current state-of-the-art models on AffectNet, establishing it as the best-performing model for inferring valence and arousal, achieving a 7% lower RMSE. Training scripts and trained weights to reproduce our results can be found here: https://github.com/wagner-niklas/CAGE_expression_inference.
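The abstract describes training on discrete expression categories together with the continuous valence/arousal coordinates of the circumplex model. A minimal sketch of such a combined objective, assuming a standard cross-entropy term for the category and a mean-squared-error term for the 2-D valence/arousal regression (the paper's exact loss formulation and weighting may differ; `alpha` and the function name are illustrative):

```python
import numpy as np

def combined_expression_loss(logits, class_label, va_pred, va_true, alpha=1.0):
    """Sketch of a joint objective: cross-entropy on the discrete expression
    category plus MSE on the continuous (valence, arousal) prediction.
    `alpha` weights the regression term (hypothetical choice)."""
    # Numerically stable softmax cross-entropy over the expression logits
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    ce = -log_probs[class_label]
    # Mean squared error over the 2-D (valence, arousal) vector
    mse = np.mean((np.asarray(va_pred, dtype=float) -
                   np.asarray(va_true, dtype=float)) ** 2)
    return ce + alpha * mse

# Example: correct valence/arousal leaves only the cross-entropy term,
# while a mismatched prediction adds the weighted regression penalty.
logits = np.array([2.0, 0.5, -1.0])
loss_exact = combined_expression_loss(logits, 0, [0.5, 0.1], [0.5, 0.1])
loss_off = combined_expression_loss(logits, 0, [0.9, -0.4], [0.5, 0.1])
```

In a training loop, the two heads of the network would supply `logits` and `va_pred`; the reported result is that adding the valence/arousal term improves the discrete-category accuracy as well.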

Results

Task                                   Dataset    Metric                 Value   Model
Facial Expression Recognition (FER)    AffectNet  Accuracy (7 emotion)   66.6    CAGE
Facial Expression Recognition (FER)    AffectNet  Accuracy (8 emotion)   62.2    CAGE
Emotion Recognition                    EMOTIC     Top-3 Accuracy (%)     14.73   CAGE

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
A Robust Incomplete Multimodal Low-Rank Adaptation Approach for Emotion Recognition (2025-07-15)
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation (2025-07-11)
CAST-Phys: Contactless Affective States Through Physiological signals Database (2025-07-08)
Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis (2025-07-06)
Multimodal Prompt Alignment for Facial Expression Recognition (2025-06-26)
How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models? (2025-06-25)