


Hate-CLIPper: Multimodal Hateful Meme Classification based on Cross-modal Interaction of CLIP Features

Gokul Karthik Kumar, Karthik Nandakumar

2022-10-12 · Hateful Meme Classification · Meme Classification
Paper · PDF · Code (official)

Abstract

Hateful memes are a growing menace on social media. While the image and its corresponding text in a meme are related, they do not necessarily convey the same meaning when viewed individually. Hence, detecting hateful memes requires careful consideration of both visual and textual information. Multimodal pre-training can be beneficial for this task because it effectively captures the relationship between the image and the text by representing them in a similar feature space. Furthermore, it is essential to model the interactions between the image and text features through intermediate fusion. Most existing methods either employ multimodal pre-training or intermediate fusion, but not both. In this work, we propose the Hate-CLIPper architecture, which explicitly models the cross-modal interactions between the image and text representations obtained using Contrastive Language-Image Pre-training (CLIP) encoders via a feature interaction matrix (FIM). A simple classifier based on the FIM representation is able to achieve state-of-the-art performance on the Hateful Memes Challenge (HMC) dataset with an AUROC of 85.8, which even surpasses the human performance of 82.65. Experiments on other meme datasets such as Propaganda Memes and TamilMemes also demonstrate the generalizability of the proposed approach. Finally, we analyze the interpretability of the FIM representation and show that cross-modal interactions can indeed facilitate the learning of meaningful concepts. The code for this work is available at https://github.com/gokulkarthik/hateclipper.
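
To make the FIM construction concrete, below is a minimal PyTorch sketch of the idea the abstract describes: project frozen CLIP image and text embeddings, take their outer product to form the feature interaction matrix, and classify from its flattened form. The class name `HateCLIPperSketch`, the projection size, and the classifier head are illustrative assumptions, not the authors' exact configuration; see the linked repository for the official implementation.

```python
import torch
import torch.nn as nn


class HateCLIPperSketch(nn.Module):
    """Minimal sketch of the Hate-CLIPper idea: project CLIP image and
    text features, form their outer product (the feature interaction
    matrix, FIM), and classify from the flattened FIM.

    Projection size, MLP width, and dropout are illustrative guesses,
    not the paper's exact hyperparameters.
    """

    def __init__(self, clip_dim: int = 512, proj_dim: int = 128):
        super().__init__()
        # Separate linear projections for the CLIP image and text embeddings.
        self.img_proj = nn.Linear(clip_dim, proj_dim)
        self.txt_proj = nn.Linear(clip_dim, proj_dim)
        # Simple classifier over the flattened FIM (sizes are illustrative).
        self.classifier = nn.Sequential(
            nn.Linear(proj_dim * proj_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, 1),  # single hateful/not-hateful logit
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        i = self.img_proj(img_feat)  # (B, proj_dim)
        t = self.txt_proj(txt_feat)  # (B, proj_dim)
        # Per-example outer product -> (B, proj_dim, proj_dim) FIM,
        # capturing every pairwise image x text feature interaction.
        fim = torch.einsum("bi,bj->bij", i, t)
        return self.classifier(fim.flatten(start_dim=1))


if __name__ == "__main__":
    # In practice img_feat/txt_feat would come from frozen CLIP encoders,
    # e.g. model.encode_image(...) / model.encode_text(...) from the
    # openai/CLIP package (ViT-B/32 yields 512-dim embeddings).
    head = HateCLIPperSketch()
    img_feat = torch.randn(4, 512)
    txt_feat = torch.randn(4, 512)
    print(head(img_feat, txt_feat).shape)  # torch.Size([4, 1])
```

Note that the flattened FIM grows quadratically with the projection size, so the CLIP embeddings are projected to a modest dimension before the interaction step to keep the classifier input tractable.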

Results

Task                | Dataset       | Metric   | Value | Model
Meme Classification | Tamil Memes   | Micro-F1 | 0.59  | Hate-CLIPper
Meme Classification | Hateful Memes | ROC-AUC  | 0.858 | Hate-CLIPper - Align
Meme Classification | MultiOFF      | Accuracy | 62.4  | Hate-CLIPper
Meme Classification | MultiOFF      | F1       | 54.8  | Hate-CLIPper
Meme Classification | HarMeme       | AUROC    | 91.87 | Hate-CLIPper
Meme Classification | HarMeme       | Accuracy | 83.9  | Hate-CLIPper
Meme Classification | Harm-P        | Accuracy | 87.6  | Hate-CLIPper
Meme Classification | Harm-P        | F1       | 86.9  | Hate-CLIPper
Meme Classification | PrideMM       | Accuracy | 75.5  | Hate-CLIPper
Meme Classification | PrideMM       | F1       | 74.1  | Hate-CLIPper

Related Papers

Detecting Harmful Memes with Decoupled Understanding and Guided CoT Reasoning (2025-06-10)
LLM-based Semantic Augmentation for Harmful Content Detection (2025-04-22)
Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection (2025-02-18)
Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions (2025-02-16)
Figurative-cum-Commonsense Knowledge Infusion for Multimodal Mental Health Meme Classification (2025-01-25)
Prompt-enhanced Network for Hateful Meme Classification (2024-11-12)
MemeCLIP: Leveraging CLIP Representations for Multimodal Meme Classification (2024-09-23)
What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation (2024-07-16)