Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, Grigorios Tsoumakas
Online hate speech is a growing problem in our society, fuelled by the vulnerabilities of the moderation regimes that characterise most social media platforms. This phenomenon is primarily driven by offensive comments, posted either during user interaction or alongside multimedia content. Nowadays, giant corporations own platforms where millions of users log in every day, and protecting those users from exposure to such phenomena is necessary both to comply with the corresponding legislation and to maintain a high level of service quality. A robust and reliable system for detecting and preventing the upload of such content would have a significant impact on our digitally interconnected society. Several aspects of our daily lives are undeniably linked to our social profiles, making us vulnerable to abusive behaviour. As a result, the lack of accurate hate speech detection mechanisms severely degrades the overall user experience, while erroneous operation of such mechanisms raises many ethical concerns. In this paper, we present 'ETHOS', a textual dataset with two variants, binary and multi-label, based on YouTube and Reddit comments validated through the Figure-Eight crowdsourcing platform. Furthermore, we present the annotation protocol used to create this dataset: an active sampling procedure for balancing our data with respect to the various aspects defined. Our key assumption is that, even when gaining only a small amount of labelled data from such a time-consuming process, we can guarantee the presence of hate speech occurrences in the examined material.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Abuse Detection | Ethos MultiLabel | Hamming Loss | 0.2948 | MLARAM |
| Abuse Detection | Ethos MultiLabel | Hamming Loss | 0.1606 | MLkNN |
| Abuse Detection | Ethos MultiLabel | Hamming Loss | 0.1395 | Binary Relevance |
| Abuse Detection | Ethos MultiLabel | Hamming Loss | 0.132 | Neural Classifier Chains |
| Abuse Detection | Ethos MultiLabel | Hamming Loss | 0.1097 | Neural Binary Relevance |
| Abuse Detection | Ethos Binary | Classification Accuracy | 0.7664 | BERT |
| Abuse Detection | Ethos Binary | F1-score | 0.7883 | BERT |
| Abuse Detection | Ethos Binary | Precision | 0.7917 | BERT |
| Abuse Detection | Ethos Binary | Classification Accuracy | 0.7734 | BiLSTM+Attention+FT |
| Abuse Detection | Ethos Binary | F1-score | 0.768 | BiLSTM+Attention+FT |
| Abuse Detection | Ethos Binary | Precision | 0.7776 | BiLSTM+Attention+FT |
| Abuse Detection | Ethos Binary | Classification Accuracy | 0.7515 | CNN+Attention+FT+GV |
| Abuse Detection | Ethos Binary | F1-score | 0.7441 | CNN+Attention+FT+GV |
| Abuse Detection | Ethos Binary | Precision | 0.7492 | CNN+Attention+FT+GV |
| Abuse Detection | Ethos Binary | Classification Accuracy | 0.6643 | SVM |
| Abuse Detection | Ethos Binary | F1-score | 0.6607 | SVM |
| Abuse Detection | Ethos Binary | Precision | 0.6647 | SVM |
| Abuse Detection | Ethos Binary | Classification Accuracy | 0.6504 | Random Forests |
| Abuse Detection | Ethos Binary | F1-score | 0.6441 | Random Forests |
| Abuse Detection | Ethos Binary | Precision | 0.6469 | Random Forests |
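The multi-label rows above report Hamming loss, i.e. the fraction of individual label assignments that disagree between prediction and ground truth, averaged over all samples and labels (lower is better). A minimal sketch of the metric, on hypothetical toy data rather than the ETHOS labels or the authors' evaluation code:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label bits that differ between truth and prediction,
    averaged over every (sample, label) pair. Lower is better."""
    total = sum(len(labels) for labels in y_true)
    wrong = sum(
        sum(1 for t, p in zip(true_row, pred_row) if t != p)
        for true_row, pred_row in zip(y_true, y_pred)
    )
    return wrong / total

# Two comments, three hypothetical hate-speech aspect labels each.
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 1, 1], [0, 1, 0]]  # one label bit wrong out of six
print(hamming_loss(y_true, y_pred))  # -> 0.16666666666666666
```

Under this metric, Neural Binary Relevance's 0.1097 means roughly 11% of all per-aspect label decisions are wrong on average.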