Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A vector quantized masked autoencoder for speech emotion recognition

Samir Sadok, Simon Leglaive, Renaud Séguier

2023-04-21 · Self-Supervised Learning · Speech Emotion Recognition · Emotion Recognition

Paper · PDF · Code (official)

Abstract

Recent years have seen remarkable progress in speech emotion recognition (SER), thanks to advances in deep learning techniques. However, the limited availability of labeled data remains a significant challenge in the field. Self-supervised learning has recently emerged as a promising solution to address this challenge. In this paper, we propose the vector quantized masked autoencoder for speech (VQ-MAE-S), a self-supervised model that is fine-tuned to recognize emotions from speech signals. The VQ-MAE-S model is based on a masked autoencoder (MAE) that operates in the discrete latent space of a vector-quantized variational autoencoder. Experimental results show that the proposed VQ-MAE-S model, pre-trained on the VoxCeleb2 dataset and fine-tuned on emotional speech data, outperforms an MAE working on the raw spectrogram representation and other state-of-the-art methods in SER.
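The abstract describes two core operations: quantizing continuous speech representations into discrete tokens via nearest-neighbour lookup in a VQ-VAE codebook, and randomly masking a subset of those tokens for the MAE to reconstruct. The following is a minimal NumPy sketch of those two steps under illustrative assumptions (toy codebook size, random data, a 50% mask ratio); it is not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: K discrete codes of dimension D (in VQ-MAE-S this would be
# learned during the VQ-VAE pre-training stage; K and D here are arbitrary).
K, D = 16, 8
codebook = rng.normal(size=(K, D))

def quantize(frames):
    """Map continuous frames (T, D) to discrete code indices (T,)
    by nearest-neighbour lookup in the codebook."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def mask_tokens(indices, mask_ratio=0.5, mask_id=K):
    """Replace a random subset of token indices with a special mask id;
    a masked autoencoder is trained to predict the original indices
    at the masked positions."""
    n_mask = int(mask_ratio * len(indices))
    masked_pos = rng.choice(len(indices), size=n_mask, replace=False)
    corrupted = indices.copy()
    corrupted[masked_pos] = mask_id
    return corrupted, masked_pos

frames = rng.normal(size=(20, D))   # stand-in for encoded spectrogram frames
tokens = quantize(frames)           # discrete latent sequence
corrupted, masked_pos = mask_tokens(tokens)
```

The key point the sketch illustrates is that masking and reconstruction happen over discrete code indices rather than raw spectrogram patches, which the paper reports outperforms an MAE on the raw spectrogram.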

Results

Task                        | Dataset       | Metric   | Value | Model
----------------------------|---------------|----------|-------|--------------------------------
Emotion Recognition         | EmoDB Dataset | Accuracy | 90.2  | VQ-MAE-S-12 (Frame) + Query2Emo
Emotion Recognition         | EmoDB Dataset | F1       | 0.891 | VQ-MAE-S-12 (Frame) + Query2Emo
Emotion Recognition         | RAVDESS       | Accuracy | 84.1  | VQ-MAE-S-12 (Frame) + Query2Emo
Emotion Recognition         | RAVDESS       | F1       | 0.844 | VQ-MAE-S-12 (Frame) + Query2Emo
Speech Emotion Recognition  | EmoDB Dataset | Accuracy | 90.2  | VQ-MAE-S-12 (Frame) + Query2Emo
Speech Emotion Recognition  | EmoDB Dataset | F1       | 0.891 | VQ-MAE-S-12 (Frame) + Query2Emo
Speech Emotion Recognition  | RAVDESS       | Accuracy | 84.1  | VQ-MAE-S-12 (Frame) + Query2Emo
Speech Emotion Recognition  | RAVDESS       | F1       | 0.844 | VQ-MAE-S-12 (Frame) + Query2Emo

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
A Robust Incomplete Multimodal Low-Rank Adaptation Approach for Emotion Recognition (2025-07-15)
Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder (2025-07-14)
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation (2025-07-11)
Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
CAST-Phys: Contactless Affective States Through Physiological signals Database (2025-07-08)