Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Deep Neural Network Augmentation: Generating Faces for Affect Analysis

Dimitrios Kollias, Shiyang Cheng, Evangelos Ververas, Irene Kotsia, Stefanos Zafeiriou

2018-11-12 | Data Augmentation | Facial Expression Recognition (FER) | Face Generation

Abstract

This paper presents a novel approach for synthesizing facial affect, either in terms of the six basic expressions (i.e., anger, disgust, fear, joy, sadness, and surprise), or in terms of valence (i.e., how positive or negative an emotion is) and arousal (i.e., the intensity of the emotion's activation). The proposed approach accepts the following inputs: i) a neutral 2D image of a person; ii) a basic facial expression or a pair of valence-arousal (VA) emotional state descriptors to be generated, or a path of affect in the 2D VA space to be generated as an image sequence. To enable affect synthesis in terms of VA, 600,000 frames from the 4DFAB database were annotated. The affect synthesis is implemented by fitting a 3D Morphable Model on the neutral image, deforming the reconstructed face to add the input affect, and blending the new face with the given affect into the original image. Qualitative experiments illustrate the generation of realistic images when the neutral image is sampled from thirteen well-known lab-controlled or in-the-wild databases, including Aff-Wild, AffectNet, and RAF-DB; comparisons with Generative Adversarial Networks (GANs) show the higher quality achieved by the proposed approach. Quantitative experiments are then conducted in which the synthesized images are used for data augmentation when training Deep Neural Networks to perform affect recognition over all databases; greatly improved performances are achieved in all cases when compared with state-of-the-art methods, as well as with GAN-based data augmentation.
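The three-stage pipeline described in the abstract (fit a 3D Morphable Model to the neutral image, deform the reconstructed face toward the target affect, blend the result back into the original image) can be sketched at a high level as below. This is a minimal illustrative skeleton, not the authors' implementation: the function names, the 100-vertex mesh, and the linear VA-driven displacement are all placeholder assumptions standing in for real 3DMM fitting, expression deformation, and rendering/blending.

```python
import numpy as np

def fit_3dmm(neutral_img):
    # Placeholder for fitting a 3D Morphable Model to the neutral face.
    # A real fitter returns recovered mesh vertices; here we return a
    # fixed-size zero mesh of hypothetical shape (n_vertices, 3).
    n_vertices = 100
    return np.zeros((n_vertices, 3))

def deform_for_affect(mesh, valence, arousal):
    # Placeholder deformation: displace every vertex along a made-up
    # direction, scaled linearly by the valence-arousal target.
    # The real method deforms the reconstructed face using expression
    # information learned from the annotated 4DFAB frames.
    direction = np.ones_like(mesh)
    return mesh + direction * (0.1 * valence + 0.1 * arousal)

def render_and_blend(neutral_img, mesh):
    # Placeholder for rendering the deformed face and blending it into
    # the original image; here we simply return a copy of the input.
    return neutral_img.copy()

def synthesize_affect(neutral_img, valence, arousal):
    # Stage 1: fit the 3DMM on the neutral image.
    mesh = fit_3dmm(neutral_img)
    # Stage 2: deform the reconstructed face toward the target affect.
    mesh = deform_for_affect(mesh, valence, arousal)
    # Stage 3: blend the new face back into the original image.
    return render_and_blend(neutral_img, mesh)

# A path in the 2D VA space yields an image sequence, one frame per point.
img = np.zeros((128, 128, 3), dtype=np.uint8)
va_path = [(0.2, 0.1), (0.5, 0.4), (0.8, 0.7)]
frames = [synthesize_affect(img, v, a) for v, a in va_path]
print(len(frames), frames[0].shape)
```

The list comprehension at the end mirrors the paper's third input mode, where a path of affect in the 2D VA space is turned into an image sequence frame by frame.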

Results

Task                               | Dataset   | Metric               | Value | Model
-----------------------------------|-----------|----------------------|-------|---------
Facial Recognition and Modelling   | RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
Facial Recognition and Modelling   | AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
Face Reconstruction                | RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
Face Reconstruction                | AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
Facial Expression Recognition (FER)| RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
Facial Expression Recognition (FER)| AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
3D                                 | RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
3D                                 | AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
3D Face Modelling                  | RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
3D Face Modelling                  | AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
3D Face Reconstruction             | RAF-DB    | Avg. Accuracy        | 77.5  | VGG-FACE
3D Face Reconstruction             | AffectNet | Accuracy (8 emotion) | 60.4  | VGG-FACE
