Hate speech detection using static BERT embeddings
- URL: http://arxiv.org/abs/2106.15537v1
- Date: Tue, 29 Jun 2021 16:17:10 GMT
- Title: Hate speech detection using static BERT embeddings
- Authors: Gaurav Rajput, Narinder Singh punn, Sanjay Kumar Sonbhadra, Sonali
Agarwal
- Abstract summary: Hate speech is emerging as a major concern; it is abusive speech that targets specific group characteristics.
In this paper, we analyze the performance of hate speech detection by replacing or integrating the word embeddings.
In comparison to fine-tuned BERT, one metric that significantly improved is specificity.
- Score: 0.9176056742068814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing popularity of social media platforms, hate speech is
emerging as a major concern: abusive speech that targets specific group
characteristics, such as gender, religion, or ethnicity, to spread violence.
Earlier, people delivered hate speech verbally, but with the expansion of
technology, some people now deliberately use social media platforms to spread
hate by posting, sharing, commenting, etc. Whether in the Christchurch mosque
shootings or in hate crimes against Asians in the West, it has been observed
that the perpetrators were strongly influenced by hate text found online. Even
though AI systems are in place to flag such text, one of the key challenges is
to reduce the false positive rate (marking non-hate as hate) so that these
systems can detect hate speech without undermining freedom of expression. In
this paper, we use the ETHOS hate speech detection dataset and analyze the
performance of a hate speech detection classifier by replacing or integrating
the word embeddings (fastText (FT), GloVe (GV), or FT + GV) with static BERT
embeddings (BE). Extensive experimental trials show that the neural network
performed better with static BE than with FT, GV, or FT + GV as word
embeddings. In comparison to fine-tuned BERT, one metric that improved
significantly is specificity.
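The abstract's headline metrics, specificity and the false positive rate, are simple functions of the binary confusion matrix; a minimal sketch (the function names and toy labels here are illustrative, not from the paper) might look like:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count TP, FP, TN, FN for a binary hate/non-hate labeling."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, fp, tn, fn

def specificity(y_true, y_pred):
    """Specificity = TN / (TN + FP); the false positive rate is its complement."""
    _, fp, tn, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp)

# 1 = hate, 0 = non-hate; one non-hate comment is wrongly flagged as hate.
y_true = [0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 1]
print(specificity(y_true, y_pred))      # 2/3
print(1 - specificity(y_true, y_pred))  # false positive rate, 1/3
```

Improving specificity is exactly the abstract's goal of lowering the false positive rate, since the two always sum to one.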
Related papers
- Silent Signals, Loud Impact: LLMs for Word-Sense Disambiguation of Coded Dog Whistles [47.61526125774749]
A dog whistle is a form of coded communication that carries a secondary meaning to specific audiences and is often weaponized for racial and socioeconomic discrimination.
We present an approach for word-sense disambiguation of dog whistles from standard speech using Large Language Models (LLMs).
We leverage this technique to create a dataset of 16,550 high-confidence coded examples of dog whistles used in formal and informal communication.
arXiv Detail & Related papers (2024-06-10T23:09:19Z) - Exploiting Hatred by Targets for Hate Speech Detection on Vietnamese Social Media Texts [0.0]
We first introduce the ViTHSD - a targeted hate speech detection dataset for Vietnamese Social Media Texts.
The dataset contains 10K comments, each comment is labeled to specific targets with three levels: clean, offensive, and hate.
The inter-annotator agreement on the dataset is 0.45 by Cohen's Kappa, which indicates moderate agreement.
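Cohen's kappa corrects raw annotator agreement for agreement expected by chance; a small self-contained sketch of the computation (toy labels, not code from the paper) is:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa between two annotators' label sequences."""
    n = len(ann_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(1 for a, b in zip(ann_a, ann_b) if a == b) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy three-level example (clean / offensive / hate -> 0 / 1 / 2).
print(cohens_kappa([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0]))  # ~0.5
```

A kappa of 0.45 falls in the 0.41-0.60 band conventionally read as "moderate" agreement.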
arXiv Detail & Related papers (2024-04-30T04:16:55Z) - An Investigation of Large Language Models for Real-World Hate Speech Detection [46.15140831710683]
A major limitation of existing methods is that hate speech detection is a highly contextual problem.
Recently, large language models (LLMs) have demonstrated state-of-the-art performance in several natural language tasks.
Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech.
arXiv Detail & Related papers (2024-01-07T00:39:33Z) - Hate Speech Targets Detection in Parler using BERT [0.0]
We present a pipeline for detecting hate speech and its targets and use it for creating Parler hate targets' distribution.
The pipeline consists of two models; one for hate speech detection and the second for target classification.
arXiv Detail & Related papers (2023-04-03T17:49:04Z) - CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z) - Hate Speech Classification Using SVM and Naive BAYES [0.0]
Many countries have developed laws to avoid online hate speech.
But as online content continues to grow, so does the spread of hate speech.
It is important to automatically process the online user contents to detect and remove hate speech.
arXiv Detail & Related papers (2022-03-21T17:15:38Z) - Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply it to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
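One common remedy for the label imbalance described above is inverse-frequency class weighting in the training loss; a hedged sketch (this mirrors the widely used "balanced" heuristic, not necessarily the paper's exact approach):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so the rare hate class counts more in the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 8 non-hate (0) vs. 2 hate (1) examples: hate gets a 4x larger weight.
print(balanced_class_weights([0] * 8 + [1] * 2))  # {0: 0.625, 1: 2.5}
```

The resulting weights can be passed to most loss functions (e.g. as per-class weights in cross-entropy) so that the abundant non-hate class does not dominate training.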
arXiv Detail & Related papers (2022-01-15T20:48:14Z) - Leveraging Transformers for Hate Speech Detection in Conversational Code-Mixed Tweets [36.29939722039909]
This paper describes the system proposed by team MIDAS-IIITD for HASOC 2021 subtask 2.
It is one of the first shared tasks focusing on detecting hate speech from Hindi-English code-mixed conversations on Twitter.
Our best performing system, a hard voting ensemble of Indic-BERT, XLM-RoBERTa, and Multilingual BERT, achieved a macro F1 score of 0.7253.
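The hard-voting ensemble mentioned above simply takes a per-example majority over the models' predicted labels; a minimal sketch (the model outputs below are stand-ins, not the team's actual predictions):

```python
from collections import Counter

def hard_vote(*model_preds):
    """Per-example majority vote over several models' label sequences."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*model_preds)]

# Three hypothetical models (e.g. Indic-BERT, XLM-RoBERTa, mBERT) on 4 tweets.
indic = [1, 0, 1, 0]
xlmr  = [1, 1, 0, 0]
mbert = [0, 1, 1, 0]
print(hard_vote(indic, xlmr, mbert))  # [1, 1, 1, 0]
```

With an odd number of models and binary labels, a majority always exists, which is one reason three-model hard-voting ensembles are a popular shared-task baseline.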
arXiv Detail & Related papers (2021-12-18T19:27:33Z) - Detection of Hate Speech using BERT and Hate Speech Word Embedding with Deep Model [0.5801044612920815]
This paper investigates the feasibility of leveraging domain-specific word embedding in Bidirectional LSTM based deep model to automatically detect/classify hate speech.
The experiments showed that domain-specific word embedding with the Bidirectional LSTM based deep model achieved a 93% F1-score, while BERT achieved up to a 96% F1-score on a balanced dataset combined from available hate speech datasets.
arXiv Detail & Related papers (2021-11-02T11:42:54Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - Demoting Racial Bias in Hate Speech Detection [39.376886409461775]
In current hate speech datasets, there exists a correlation between annotators' perceptions of toxicity and signals of African American English (AAE).
In this paper, we use adversarial training to mitigate this bias, introducing a hate speech classifier that learns to detect toxic sentences while demoting confounds corresponding to AAE texts.
Experimental results on a hate speech dataset and an AAE dataset suggest that our method is able to substantially reduce the false positive rate for AAE text while only minimally affecting the performance of hate speech classification.
arXiv Detail & Related papers (2020-05-25T17:43:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.