HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
- URL: http://arxiv.org/abs/2012.10289v1
- Date: Fri, 18 Dec 2020 15:12:14 GMT
- Title: HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
- Authors: Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan
Goyal, and Animesh Mukherjee
- Abstract summary: We introduce HateXplain, the first benchmark hate speech dataset covering multiple aspects of the issue.
Each post in our dataset is annotated from three different perspectives.
We observe that models that utilize the human rationales for training perform better in reducing unintended bias towards target communities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hate speech is a challenging issue plaguing online social media. While
better models for hate speech detection are continuously being developed, there
is little research on the bias and interpretability aspects of hate speech. In
this paper, we introduce HateXplain, the first benchmark hate speech dataset
covering multiple aspects of the issue. Each post in our dataset is annotated
from three different perspectives: the basic, commonly used 3-class
classification (i.e., hate, offensive or normal), the target community (i.e.,
the community that has been the victim of hate speech/offensive speech in the
post), and the rationales, i.e., the portions of the post on which their
labelling decision (as hate, offensive or normal) is based. We utilize existing
state-of-the-art models and observe that even models that perform very well in
classification do not score high on explainability metrics like model
plausibility and faithfulness. We also observe that models that utilize the
human rationales for training perform better in reducing unintended bias
towards target communities. We have made our code and dataset public at
https://github.com/punyajoy/HateXplain
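The three annotation perspectives described in the abstract (3-class label, target community, and token-level rationales) can be sketched as a single record, with a majority vote aggregating the per-annotator labels. The field names below are illustrative and may not match the released JSON schema exactly:

```python
from collections import Counter

# Hypothetical record mirroring the three annotation perspectives:
# a 3-class label, the target community, and binary rationale masks
# over the post's tokens (one mask per annotator who saw rationales).
post = {
    "post_tokens": ["this", "is", "an", "example", "post"],
    "annotators": [
        {"label": "hatespeech", "target": ["Community-A"]},
        {"label": "hatespeech", "target": ["Community-A"]},
        {"label": "offensive",  "target": ["Community-A"]},
    ],
    # 0/1 masks marking the tokens that justify each annotator's label.
    "rationales": [
        [0, 0, 0, 1, 1],
        [0, 0, 0, 1, 0],
    ],
}

def majority_label(record):
    """Aggregate per-annotator labels by majority vote."""
    counts = Counter(a["label"] for a in record["annotators"])
    return counts.most_common(1)[0][0]

def rationale_scores(record):
    """Average the binary rationale masks into per-token weights."""
    masks = record["rationales"]
    n = len(masks)
    return [sum(col) / n for col in zip(*masks)]

print(majority_label(post))    # -> "hatespeech" (2 of 3 annotators)
print(rationale_scores(post))  # -> [0.0, 0.0, 0.0, 1.0, 0.5]
```

Averaged rationale masks like these are what explainability metrics such as plausibility compare against a model's token attributions.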
Related papers
- Into the LAIONs Den: Investigating Hate in Multimodal Datasets [67.21783778038645]
This paper investigates the effect of scaling datasets on hateful content through a comparative audit of two datasets: LAION-400M and LAION-2B.
We found that hate content increased by nearly 12% with dataset scale, measured both qualitatively and quantitatively.
We also found that filtering dataset contents based on Not Safe For Work (NSFW) values calculated based on images alone does not exclude all the harmful content in alt-text.
arXiv Detail & Related papers (2023-11-06T19:00:05Z)
- HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning [29.519687405350304]
We introduce a hate speech detection framework, HARE, which harnesses the reasoning capabilities of large language models (LLMs) to fill gaps in explanations of hate speech.
Experiments on SBIC and Implicit Hate benchmarks show that our method, using model-generated data, consistently outperforms baselines.
Our method enhances the explanation quality of trained models and improves generalization to unseen datasets.
arXiv Detail & Related papers (2023-11-01T06:09:54Z)
- Revisiting Hate Speech Benchmarks: From Data Curation to System Deployment [26.504056750529124]
We present GOTHate, a large-scale code-mixed crowdsourced dataset of around 51k posts for hate speech detection from Twitter.
We benchmark it with 10 recent baselines and investigate how adding endogenous signals enhances the hate speech detection task.
Our solution HEN-mBERT is a modular, multilingual, mixture-of-experts model that enriches the linguistic subspace with latent endogenous signals.
arXiv Detail & Related papers (2023-06-01T19:36:52Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply it to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Reducing Target Group Bias in Hate Speech Detectors [56.94616390740415]
We show that text classification models trained on large publicly available datasets may significantly under-perform on several protected groups.
We propose to perform token-level hate sense disambiguation, and utilize tokens' hate sense representations for detection.
arXiv Detail & Related papers (2021-12-07T17:49:34Z)
- Unsupervised Domain Adaptation for Hate Speech Detection Using a Data Augmentation Approach [6.497816402045099]
We propose an unsupervised domain adaptation approach to augment labeled data for hate speech detection.
We show our approach improves Area under the Precision/Recall curve by as much as 42% and recall by as much as 278%.
arXiv Detail & Related papers (2021-07-27T15:01:22Z)
- An Information Retrieval Approach to Building Datasets for Hate Speech Detection [3.587367153279349]
A common practice is to only annotate tweets containing known "hate words."
A second challenge is that definitions of hate speech tend to be highly variable and subjective.
Our key insight is that the rarity and subjectivity of hate speech are akin to that of relevance in information retrieval (IR)
arXiv Detail & Related papers (2021-06-17T19:25:39Z)
- Towards generalisable hate speech detection: a review on obstacles and solutions [6.531659195805749]
This survey paper attempts to summarise how generalisable existing hate speech detection models are.
It sums up existing attempts at addressing the main obstacles, and then proposes directions of future research to improve generalisation in hate speech detection.
arXiv Detail & Related papers (2021-02-17T17:27:48Z)
- Trawling for Trolling: A Dataset [56.1778095945542]
We present a dataset that models trolling as a subcategory of offensive content.
The dataset has 12,490 samples, split across 5 classes; Normal, Profanity, Trolling, Derogatory and Hate Speech.
arXiv Detail & Related papers (2020-08-02T17:23:55Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.