Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Abuse Detection

23 benchmarks · 73 papers

Abuse detection is the task of identifying abusive behavior, such as hate speech, offensive language, sexism, and racism, in utterances from social media platforms (source: https://arxiv.org/abs/1802.00385).
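As a minimal illustration of the task framing (abusive vs. not abusive at the utterance level), here is a toy lexicon-lookup baseline. The word list and example texts are invented for illustration; the benchmarks below are evaluated with trained classifiers, not lookups like this.

```python
# Toy illustration only: a lexicon-lookup baseline for binary abuse
# detection. ABUSIVE_TERMS is a hypothetical word list, not from any
# of the datasets listed on this page.
ABUSIVE_TERMS = {"idiot", "stupid", "trash"}

def is_abusive(text: str) -> bool:
    """Flag an utterance as abusive if any lexicon term appears as a token."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return not tokens.isdisjoint(ABUSIVE_TERMS)

print(is_abusive("You are an idiot"))  # True
print(is_abusive("Have a nice day"))   # False
```

Real systems differ mainly in the classifier, but the input/output contract — text in, binary (or multi-label) abuse judgment out — is the one the benchmarks below score.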

Benchmarks

Abuse Detection on Ethos Binary

F1-score, Classification Accuracy, Precision

Abuse Detection on HateXplain

AUROC, Accuracy, Macro F1, Macro-F1

Abuse Detection on HopeEDI

Weighted Average F1-score

Abuse Detection on Ethos MultiLabel

Hamming Loss

Abuse Detection on Waseem et al., 2018

AAA, F1 (micro)

Abuse Detection on AbusEval

Macro F1

Abuse Detection on Automatic Misogyny Identification

Accuracy

Abuse Detection on HatEval

Macro F1

Abuse Detection on HateMM

TEST F1 (macro)

Abuse Detection on OffensEval 2019

Macro F1

Abuse Detection on ToLD-Br

F1-score

Abuse Detection on DKhate

F1

Abuse Detection on Hostility Detection Dataset in Hindi

F1 score

Abuse Detection on KanHope

F1-score (Weighted)

Abuse Detection on OLID

Macro F1

Abuse Detection on SHAJ

F1

Abuse Detection on bajer_danish_misogyny

F1
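Two metrics recur throughout the listings above: macro F1 (per-class F1 averaged without class weighting, reported by HateXplain, AbusEval, HatEval, OffensEval 2019, and OLID) and Hamming loss (the fraction of label slots predicted incorrectly, reported by the multi-label Ethos MultiLabel benchmark). A minimal pure-Python sketch of both, on invented example labels:

```python
# Illustrative implementations of Macro F1 and Hamming loss.
# The label vectors below are made up for demonstration.

def macro_f1(y_true, y_pred):
    """F1 computed per class, then averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)

def hamming_loss(Y_true, Y_pred):
    """Fraction of wrong label slots over equal-length binary label vectors."""
    wrong = sum(t != p
                for row_t, row_p in zip(Y_true, Y_pred)
                for t, p in zip(row_t, row_p))
    total = sum(len(row) for row in Y_true)
    return wrong / total

# Binary case: F1 is 2/3 for class 0 and 4/5 for class 1 -> macro 11/15.
print(macro_f1([1, 0, 1, 1], [1, 0, 0, 1]))
# Multi-label case: 1 wrong slot out of 3 -> 1/3.
print(hamming_loss([[1, 0, 1]], [[1, 1, 1]]))
```

Macro F1 treats rare classes (often the abusive class) on equal footing with the majority class, which is why it dominates these leaderboards over plain accuracy.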