Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dawn of the transformer era in speech emotion recognition: closing the valence gap

Johannes Wagner, Andreas Triantafyllopoulos, Hagen Wierstorf, Maximilian Schmitt, Felix Burkhardt, Florian Eyben, Björn W. Schuller

2022-03-14 · Fairness · Speech Emotion Recognition · Emotion Recognition
Paper · PDF · Code (official)

Abstract

Recent advances in transformer-based architectures that are pre-trained in a self-supervised manner have shown great promise in several machine learning tasks. In the audio domain, such architectures have also been successfully utilised in the field of speech emotion recognition (SER). However, existing works have not evaluated the influence of model size and pre-training data on downstream performance, and have paid limited attention to generalisation, robustness, fairness, and efficiency. The present contribution conducts a thorough analysis of these aspects on several pre-trained variants of wav2vec 2.0 and HuBERT that we fine-tuned on the dimensions arousal, dominance, and valence of MSP-Podcast, while additionally using IEMOCAP and MOSI to test cross-corpus generalisation. To the best of our knowledge, we obtain the top performance for valence prediction without use of explicit linguistic information, with a concordance correlation coefficient (CCC) of .638 on MSP-Podcast. Furthermore, our investigations reveal that transformer-based architectures are more robust to small perturbations compared to a CNN-based baseline and fair with respect to biological sex groups, but not towards individual speakers. Finally, we are the first to show that their extraordinary success on valence is based on implicit linguistic information learnt during fine-tuning of the transformer layers, which explains why they perform on par with recent multimodal approaches that explicitly utilise textual information. Our findings collectively paint the following picture: transformer-based architectures constitute the new state-of-the-art in SER, but further advances are needed to mitigate remaining robustness and individual speaker issues. To make our findings reproducible, we release the best-performing model to the community.

Results

Task | Dataset | Metric | Value | Model
Emotion Recognition | MSP-Podcast | Concordance correlation coefficient (CCC) | 0.638 | w2v2-L-robust-12
Emotion Recognition | MSP-Podcast (Valence) | CCC | 0.638 | w2v2-L-robust-12
Emotion Recognition | MSP-Podcast (Dominance) | CCC | 0.655 | w2v2-L-robust-12
Emotion Recognition | MSP-Podcast (Activation) | CCC | 0.745 | w2v2-L-robust-12
Speech Emotion Recognition | MSP-Podcast (Valence) | CCC | 0.638 | w2v2-L-robust-12
Speech Emotion Recognition | MSP-Podcast (Dominance) | CCC | 0.655 | w2v2-L-robust-12
Speech Emotion Recognition | MSP-Podcast (Activation) | CCC | 0.745 | w2v2-L-robust-12
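The concordance correlation coefficient reported above measures both correlation and absolute agreement between predicted and gold emotion-dimension values. A minimal NumPy sketch of the standard CCC formula (this is the generic definition, not code from the paper's released implementation):

```python
import numpy as np

def concordance_cc(preds, labels):
    """Concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Ranges from -1 to 1; 1 means perfect agreement."""
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    mean_p, mean_l = preds.mean(), labels.mean()
    # Population (biased) variance and covariance, consistent with the formula above
    var_p, var_l = preds.var(), labels.var()
    cov = ((preds - mean_p) * (labels - mean_l)).mean()
    return 2 * cov / (var_p + var_l + (mean_p - mean_l) ** 2)
```

Unlike plain Pearson correlation, CCC penalises systematic shifts: predictions that are perfectly correlated with the labels but offset by a constant score below 1.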

Related Papers

- Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
- A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
- FedGA: A Fair Federated Learning Framework Based on the Gini Coefficient (2025-07-17)
- Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
- Looking for Fairness in Recommender Systems (2025-07-16)
- FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
- Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone (2025-07-15)
- Guiding LLM Decision-Making with Fairness Reward Models (2025-07-15)