Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis

Wei Han, Hui Chen, Soujanya Poria

2021-09-01 · EMNLP 2021 · Tasks: Sentiment Analysis, Multimodal Sentiment Analysis

Paper · PDF · Code (official)

Abstract

In multimodal sentiment analysis (MSA), the performance of a model highly depends on the quality of synthesized embeddings. These embeddings are generated from the upstream process called multimodal fusion, which aims to extract and combine the input unimodal raw data to produce a richer multimodal representation. Previous work either back-propagates the task loss or manipulates the geometric property of feature spaces to produce favorable fusion results, which neglects the preservation of critical task-related information that flows from input to the fusion results. In this work, we propose a framework named MultiModal InfoMax (MMIM), which hierarchically maximizes the Mutual Information (MI) in unimodal input pairs (inter-modality) and between multimodal fusion result and unimodal input in order to maintain task-related information through multimodal fusion. The framework is jointly trained with the main task (MSA) to improve the performance of the downstream MSA task. To address the intractable issue of MI bounds, we further formulate a set of computationally simple parametric and non-parametric methods to approximate their truth value. Experimental results on the two widely used datasets demonstrate the efficacy of our approach. The implementation of this work is publicly available at https://github.com/declare-lab/Multimodal-Infomax.
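The abstract notes that the true MI bounds are intractable and are approximated with computationally simple estimators. A widely used contrastive lower bound on mutual information is InfoNCE; the sketch below (in NumPy, with a plain dot-product critic rather than the learned critics and the specific parametric/non-parametric estimators used in MMIM) illustrates the general idea of scoring matched pairs against in-batch negatives:

```python
import numpy as np

def infonce_lower_bound(x, y):
    """InfoNCE-style lower bound on I(X; Y) for paired batches
    x, y of shape [N, d]. Uses dot-product similarity as the
    critic; MMIM itself uses learned estimators, so this is only
    an illustrative stand-in."""
    n = x.shape[0]
    scores = x @ y.T                                     # [N, N] pairwise similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # Diagonal entries are the positive (matched) pairs;
    # the bound is log N + E[log softmax of the positive].
    return np.log(n) + np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
# A strongly correlated pair should yield a higher MI estimate
# than an independent pair.
correlated = infonce_lower_bound(z, z + 0.1 * rng.normal(size=(128, 16)))
independent = infonce_lower_bound(z, rng.normal(size=(128, 16)))
```

Note that the estimate is capped at log N (the batch size sets a ceiling on the bound), which is one reason the paper resorts to a set of parametric and non-parametric approximations rather than a single estimator.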

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Sentiment Analysis | CMU-MOSI | Acc-2 | 84.14 | MMIM |
| Sentiment Analysis | CMU-MOSI | Acc-7 | 46.65 | MMIM |
| Sentiment Analysis | CMU-MOSI | Corr | 0.8 | MMIM |
| Sentiment Analysis | CMU-MOSI | F1 | 84 | MMIM |
| Sentiment Analysis | CMU-MOSI | MAE | 0.7 | MMIM |
| Sentiment Analysis | CMU-MOSI | Acc-2 | 82.54 | self-M |
| Sentiment Analysis | CMU-MOSI | Acc-7 | 45.79 | self-M |
| Sentiment Analysis | CMU-MOSI | Corr | 0.795 | self-M |
| Sentiment Analysis | CMU-MOSI | F1 | 82.68 | self-M |
| Sentiment Analysis | CMU-MOSI | MAE | 0.712 | self-M |
| Sentiment Analysis | CMU-MOSI | Acc-2 | 82.37 | MAG-BERT* |
| Sentiment Analysis | CMU-MOSI | Acc-7 | 43.62 | MAG-BERT* |
| Sentiment Analysis | CMU-MOSI | Corr | 0.781 | MAG-BERT* |
| Sentiment Analysis | CMU-MOSI | F1 | 82.5 | MAG-BERT* |
| Sentiment Analysis | CMU-MOSI | MAE | 0.727 | MAG-BERT* |

Related Papers

- AdaptiSent: Context-Aware Adaptive Attention for Multimodal Aspect-Based Sentiment Analysis (2025-07-17)
- AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles (2025-07-15)
- DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
- SentiDrop: A Multi Modal Machine Learning model for Predicting Dropout in Distance Learning (2025-07-14)
- GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
- FINN-GL: Generalized Mixed-Precision Extensions for FPGA-Accelerated LSTMs (2025-06-25)
- Unpacking Generative AI in Education: Computational Modeling of Teacher and Student Perspectives in Social Media Discourse (2025-06-19)
- Characterizing Linguistic Shifts in Croatian News via Diachronic Word Embeddings (2025-06-16)