
HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection

Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, Animesh Mukherjee

Published: 2020-12-18
Tasks: Text Classification, Hate Speech Detection

Abstract

Hate speech is a challenging issue plaguing online social media. While better models for hate speech detection are continuously being developed, there is little research on the bias and interpretability aspects of hate speech classifiers. In this paper, we introduce HateXplain, the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in our dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive, or normal), the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post), and the rationales, i.e., the portions of the post on which the labelling decision (as hate, offensive, or normal) is based. We evaluate existing state-of-the-art models and observe that even models that perform very well on classification do not score high on explainability metrics such as plausibility and faithfulness. We also observe that models that utilize the human rationales for training are better at reducing unintended bias towards target communities. Our code and dataset are public at https://github.com/punyajoy/HateXplain
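The abstract describes three annotation layers per post: a 3-class label, the target community, and token-level rationales. A minimal sketch of how such a record might be represented and aggregated — the field names and structure here are illustrative, not necessarily the dataset's actual JSON schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class HateXplainPost:
    """One annotated post; fields mirror the paper's three annotation layers."""
    tokens: list      # the post, pre-tokenized
    labels: list      # per-annotator class: "hatespeech" | "offensive" | "normal"
    targets: list     # per-annotator target communities
    rationales: list  # per-annotator 0/1 masks over the tokens

    def majority_label(self):
        # Aggregate the 3-class annotations by majority vote.
        return Counter(self.labels).most_common(1)[0][0]

    def rationale_tokens(self):
        # Tokens flagged as evidence by at least one annotator.
        flagged = [any(mask[i] for mask in self.rationales)
                   for i in range(len(self.tokens))]
        return [t for t, keep in zip(self.tokens, flagged) if keep]
```

Training on `rationale_tokens`-style supervision (e.g., as an attention target) is what distinguishes the "-HateXplain" model variants in the results below.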

Results

The Abuse Detection and Hate Speech Detection leaderboards report identical numbers, so they are combined below. All results are on the HateXplain dataset; blank cells were not reported.

| Model                   | Accuracy | Macro F1 | AUROC |
|-------------------------|----------|----------|-------|
| BERT-HateXplain [Attn]  | 0.698    | 0.687    | 0.851 |
| BERT-HateXplain [LIME]  |          | 0.687    | 0.851 |
| BERT [Attn]             | 0.69     | 0.674    | 0.843 |
| BiRNN-HateXplain [Attn] |          | 0.629    | 0.805 |
| BiRNN-Attn [Attn]       | 0.621    |          | 0.795 |
| CNN-GRU [LIME]          | 0.629    | 0.614    | 0.793 |
| BiRNN [LIME]            | 0.595    | 0.575    | 0.767 |
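The table reports accuracy, macro F1, and AUROC. Macro F1 is the unweighted mean of the per-class F1 scores, so the minority classes (hate, offensive) count as much as the majority class. A minimal sketch for the 3-class setting, assuming no external libraries:

```python
def macro_f1(y_true, y_pred, classes=("hatespeech", "offensive", "normal")):
    """Unweighted mean of per-class F1 over the 3-class label set."""
    f1s = []
    for c in classes:
        # Per-class counts: treat class c as "positive", everything else as "negative".
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because the average is unweighted, a model that predicts only the majority class scores poorly on macro F1 even if its accuracy looks reasonable, which is why the table lists both.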

Related Papers

- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- Fine-Grained Chinese Hate Speech Understanding: Span-Level Resources, Coded Term Lexicon, and Enhanced Detection Frameworks (2025-07-15)
- GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
- The Trilemma of Truth in Large Language Models (2025-06-30)
- Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack (2025-06-30)
- Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems (2025-06-25)
- Can Generated Images Serve as a Viable Modality for Text-Centric Multimodal Learning? (2025-06-21)
- SHREC and PHEONA: Using Large Language Models to Advance Next-Generation Computational Phenotyping (2025-06-19)