Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition

Hengshun Zhou, Debin Meng, Yuanyuan Zhang, Xiaojiang Peng, Jun Du, Kai Wang, Yu Qiao

2020-12-27 · Facial Expression Recognition (FER) · Video Emotion Recognition · Emotion Recognition

Abstract

Audio-video emotion recognition aims to classify a given video into basic emotions. In this paper, we describe our approaches in EmotiW 2019, which mainly explore emotion features and feature fusion strategies for the audio and visual modalities. For emotion features, we explore audio features based on both speech spectrograms and Log Mel-spectrograms, and evaluate several facial features extracted with different CNN models and different emotion-pretraining strategies. For fusion strategies, we explore intra-modal and cross-modal fusion methods, such as designing attention mechanisms to highlight important emotion features, and comparing feature concatenation with factorized bilinear pooling (FBP) for cross-modal feature fusion. With careful evaluation, we obtain 65.5% accuracy on the AFEW validation set and 62.48% on the test set, ranking third in the challenge.
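The factorized bilinear pooling mentioned above fuses two modality features by projecting each into a shared factor space, taking their element-wise product, and sum-pooling over groups of factors. A minimal NumPy sketch of this idea follows; the dimensions, random projection matrices `U` and `V`, and the final signed-sqrt/L2 normalization step are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def factorized_bilinear_pooling(a, v, U, V, k=4):
    """Fuse an audio feature `a` and a visual feature `v` via FBP.

    U: (dim_a, d*k) and V: (dim_v, d*k) stand in for learned
    projections (random here, for illustration only). The Hadamard
    product of the two projections is sum-pooled over windows of
    size k, then passed through signed square-root and L2
    normalization, a common post-processing step for bilinear
    features.
    """
    joint = (a @ U) * (v @ V)                           # element-wise product in factor space
    pooled = joint.reshape(-1, k).sum(axis=1)           # sum-pool over k factors per output dim
    signed = np.sign(pooled) * np.sqrt(np.abs(pooled))  # signed square-root (power norm)
    norm = np.linalg.norm(signed)
    return signed / norm if norm > 0 else signed

# Hypothetical feature dimensions, chosen only for the demo.
rng = np.random.default_rng(0)
dim_a, dim_v, d, k = 128, 256, 64, 4
a = rng.standard_normal(dim_a)                  # audio-modality feature
v = rng.standard_normal(dim_v)                  # visual-modality feature
U = rng.standard_normal((dim_a, d * k))
V = rng.standard_normal((dim_v, d * k))

z = factorized_bilinear_pooling(a, v, U, V, k=k)
print(z.shape)  # (64,) — a compact fused feature
```

The low-rank factorization keeps the fused feature small (here 64-dimensional) instead of materializing the full `dim_a × dim_v` outer product, which is the practical appeal of FBP over naive bilinear pooling.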

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Facial Recognition and Modelling | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| Facial Recognition and Modelling | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |
| Face Reconstruction | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| Face Reconstruction | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |
| Facial Expression Recognition (FER) | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| Facial Expression Recognition (FER) | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |
| 3D | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| 3D | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |
| 3D Face Modelling | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| 3D Face Modelling | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |
| 3D Face Reconstruction | FER+ | Accuracy | 89.257 | LResNet50E-IR |
| 3D Face Reconstruction | AffectNet | Accuracy (8 emotion) | 53.925 | LResNet50E-IR |

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
A Robust Incomplete Multimodal Low-Rank Adaptation Approach for Emotion Recognition (2025-07-15)
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation (2025-07-11)
CAST-Phys: Contactless Affective States Through Physiological signals Database (2025-07-08)
Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis (2025-07-06)
Multimodal Prompt Alignment for Facial Expression Recognition (2025-06-26)
How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models? (2025-06-25)