Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MIntRec: A New Dataset for Multimodal Intent Recognition

Hanlei Zhang, Hua Xu, Xin Wang, Qianrui Zhou, Shaojie Zhao, Jiayan Teng

Published: 2022-09-09 · Tasks: Multimodal Intent Recognition, Intent Recognition
Links: Paper · PDF · Code (official)

Abstract

Multimodal intent recognition is a significant task for understanding human language in real-world multimodal scenes. Most existing intent recognition methods have limitations in leveraging the multimodal information due to the restrictions of the benchmark datasets with only text information. This paper introduces a novel dataset for multimodal intent recognition (MIntRec) to address this issue. It formulates coarse-grained and fine-grained intent taxonomies based on the data collected from the TV series Superstore. The dataset consists of 2,224 high-quality samples with text, video, and audio modalities and has multimodal annotations among twenty intent categories. Furthermore, we provide annotated bounding boxes of speakers in each video segment and achieve an automatic process for speaker annotation. MIntRec is helpful for researchers to mine relationships between different modalities to enhance the capability of intent recognition. We extract features from each modality and model cross-modal interactions by adapting three powerful multimodal fusion methods to build baselines. Extensive experiments show that employing the non-verbal modalities achieves substantial improvements compared with the text-only modality, demonstrating the effectiveness of using multimodal information for intent recognition. The gap between the best-performing methods and humans indicates the challenge and importance of this task for the community. The full dataset and codes are available for use at https://github.com/thuiar/MIntRec.
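The baselines mentioned in the abstract (MAG-BERT, MulT, MISA) all fuse text, audio, and video features before classification. As an illustrative sketch only (not the paper's code), the simplest form of such fusion concatenates per-modality feature vectors and applies a linear intent classifier; the feature dimensions below are assumptions, not MIntRec's actual extraction settings.

```python
import numpy as np

# Illustrative early-fusion sketch (not the paper's method):
# concatenate per-modality features and classify with a linear layer.
rng = np.random.default_rng(0)

N_CLASSES = 20                                   # MIntRec's fine-grained taxonomy size
TEXT_DIM, AUDIO_DIM, VIDEO_DIM = 768, 128, 256   # hypothetical feature dimensions

def fuse(text_feat, audio_feat, video_feat):
    """Concatenate modality features into one fused vector."""
    return np.concatenate([text_feat, audio_feat, video_feat], axis=-1)

def classify(fused, weights, bias):
    """Linear intent classifier over the fused representation."""
    logits = fused @ weights + bias
    return int(np.argmax(logits))

# One synthetic sample per modality (random stand-ins for real features)
text = rng.standard_normal(TEXT_DIM)
audio = rng.standard_normal(AUDIO_DIM)
video = rng.standard_normal(VIDEO_DIM)

fused = fuse(text, audio, video)
W = rng.standard_normal((fused.shape[-1], N_CLASSES))
b = np.zeros(N_CLASSES)
pred = classify(fused, W, b)
```

The actual baselines learn far richer cross-modal interactions (gating in MAG-BERT, cross-modal attention in MulT, modality-invariant/-specific subspaces in MISA), but they share this overall shape: per-modality features in, a single intent prediction out.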

Results

Task               | Dataset | Metric                | Value | Model
-------------------|---------|-----------------------|-------|------------------------------
Intent Recognition | MIntRec | Accuracy (20 classes) | 85.51 | Human
Intent Recognition | MIntRec | Accuracy (Binary)     | 94.72 | Human
Intent Recognition | MIntRec | Accuracy (20 classes) | 72.65 | MAG-BERT (Text + Audio + Video)
Intent Recognition | MIntRec | Accuracy (Binary)     | 89.24 | MAG-BERT (Text + Audio + Video)
Intent Recognition | MIntRec | Accuracy (20 classes) | 72.52 | MulT (Text + Audio + Video)
Intent Recognition | MIntRec | Accuracy (Binary)     | 89.19 | MulT (Text + Audio + Video)
Intent Recognition | MIntRec | Accuracy (20 classes) | 72.29 | MISA (Text + Audio + Video)
Intent Recognition | MIntRec | Accuracy (Binary)     | 89.21 | MISA (Text + Audio + Video)
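The table reports two metrics per model: accuracy over the 20 fine-grained intents and a binary accuracy over the coarse-grained taxonomy. A minimal sketch of how both could be computed from fine-grained predictions follows; the fine-to-coarse mapping here is hypothetical, not MIntRec's actual taxonomy assignment.

```python
import numpy as np

def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    return float((preds == labels).mean())

# Hypothetical mapping of the 20 fine intents onto 2 coarse classes
# (NOT MIntRec's real assignment): first 10 -> class 0, rest -> class 1.
fine_to_coarse = np.array([0] * 10 + [1] * 10)

# Toy gold labels and model predictions over fine-grained intent IDs
labels = np.array([0, 3, 12, 19, 7, 15])
preds  = np.array([0, 3, 12, 18, 7, 14])

acc_20  = accuracy(preds, labels)                                   # 20-class accuracy
acc_bin = accuracy(fine_to_coarse[preds], fine_to_coarse[labels])   # binary accuracy
```

Note how binary accuracy can only be greater than or equal to 20-class accuracy: a fine-grained error still counts as correct if both intents fall in the same coarse class, which is consistent with the binary scores in the table exceeding the 20-class ones.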

Related Papers

ADMC: Attention-based Diffusion Model for Missing Modalities Feature Completion (2025-07-08)
WDMIR: Wavelet-Driven Multimodal Intent Recognition (2025-05-27)
From Intent Discovery to Recognition with Topic Modeling and Synthetic Data (2025-05-16)
Ask, Fail, Repeat: Meeseeks, an Iterative Feedback Benchmark for LLMs' Multi-turn Instruction-Following Ability (2025-04-30)
A-MESS: Anchor based Multimodal Embedding with Semantic Synchronization for Multimodal Intent Recognition (2025-03-25)
TinySQL: A Progressive Text-to-SQL Dataset for Mechanistic Interpretability Research (2025-03-17)
Understanding and Enhancing the Transferability of Jailbreaking Attacks (2025-02-05)
EICopilot: Search and Explore Enterprise Information over Large-scale Knowledge Graphs with LLM-driven Agents (2025-01-23)