Multimodal Emotion Recognition

15 benchmarks · 180 papers

This is a leaderboard for multimodal emotion recognition on the IEMOCAP dataset. The modality abbreviations are A: Acoustic, T: Text, V: Visual.

Please list the modalities in brackets after the model name.

All models must use the standard five emotion categories and are evaluated with the standard leave-one-session-out (LOSO) protocol. See the papers for references.
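For readers unfamiliar with the protocol: IEMOCAP contains five recording sessions, and LOSO trains on four sessions while testing on the held-out fifth, rotating through all five. A minimal sketch of the splitting logic, using synthetic stand-in labels and a trivial majority-class baseline in place of a real model (all data and names here are illustrative, not from the dataset):

```python
import random
from collections import Counter

random.seed(0)
# Synthetic stand-in: 5 sessions x 20 utterances, labels drawn from 5 emotion classes.
data = [(session, random.randrange(5)) for session in range(1, 6) for _ in range(20)]

fold_accs = []
for held_out in range(1, 6):
    # Train on four sessions, test on the held-out fifth.
    train = [label for s, label in data if s != held_out]
    test = [label for s, label in data if s == held_out]
    majority = Counter(train).most_common(1)[0][0]  # trivial baseline "model"
    fold_accs.append(sum(label == majority for label in test) / len(test))

print(len(fold_accs))  # → 5 (one accuracy per held-out session)
```

Reported LOSO numbers are typically the mean over the five folds, so every utterance is tested exactly once with no speaker overlap between train and test.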

Benchmarks

Multimodal Emotion Recognition on IEMOCAP-4

Multimodal Emotion Recognition on MELD

Multimodal Emotion Recognition on IEMOCAP

Multimodal Emotion Recognition on CMU-MOSEI-Sentiment

Multimodal Emotion Recognition on CMU-MOSEI-Sentiment-3

Multimodal Emotion Recognition on Expressive Hands and Faces (EHF)

Multimodal Emotion Recognition on MELD-Sentiment