Transfer Learning for Hate Speech Detection in Social Media
- URL: http://arxiv.org/abs/1906.03829v3
- Date: Sun, 29 Oct 2023 16:22:12 GMT
- Title: Transfer Learning for Hate Speech Detection in Social Media
- Authors: Lanqin Yuan and Tianyu Wang and Gabriela Ferraro and Hanna Suominen
and Marian-Andrei Rizoiu
- Abstract summary: This paper uses a transfer learning technique to leverage two independent datasets jointly.
We build an interpretable two-dimensional visualization tool of the constructed hate speech representation -- dubbed the Map of Hate.
We show that the joint representation boosts prediction performances when only a limited amount of supervision is available.
- Score: 14.759208309842178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, the internet is an integral part of our daily lives, enabling people
to be more connected than ever before. However, this greater connectivity and
access to information increase exposure to harmful content such as
cyber-bullying and cyber-hatred. Models based on machine learning and natural
language processing offer a way to make online platforms safer by identifying
hate speech in web text autonomously. However, the main difficulty is annotating a
sufficiently large number of examples to train these models. This paper uses a
transfer learning technique to leverage two independent datasets jointly and
builds a single representation of hate speech. We build an interpretable
two-dimensional visualization tool of the constructed hate speech
representation -- dubbed the Map of Hate -- in which multiple datasets can be
projected and comparatively analyzed. The hateful content is annotated
differently across the two datasets (racist and sexist in one dataset, hateful
and offensive in another). However, the common representation successfully
projects the harmless class of both datasets into the same space and can be
used to uncover labeling errors (false positives). We also show that the joint
representation boosts prediction performances when only a limited amount of
supervision is available. These methods and insights hold the potential to make
social media safer and to reduce the need to expose human moderators and
annotators to distressing online messaging.
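The approach described above can be sketched as a shared encoder with one classification head per dataset, trained on a joint loss; the "Map of Hate" then corresponds to a 2D projection of the shared representation. Below is a minimal NumPy sketch under these assumptions — the dimensions, label sets, encoder, and projection method (PCA via SVD) are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Hypothetical dimensions: feature-vector input, shared hidden representation.
d_in, d_hid = 300, 64

# Shared encoder parameters, plus one head per dataset:
# dataset A labels {none, racist, sexist}; dataset B {neither, hateful, offensive}.
W_shared = rng.normal(scale=0.1, size=(d_in, d_hid))
W_head_a = rng.normal(scale=0.1, size=(d_hid, 3))
W_head_b = rng.normal(scale=0.1, size=(d_hid, 3))

def encode(x):
    """Map inputs into the single shared hate-speech representation."""
    return np.tanh(x @ W_shared)

# Toy batches standing in for the two independently annotated datasets.
x_a, y_a = rng.normal(size=(8, d_in)), rng.integers(0, 3, size=8)
x_b, y_b = rng.normal(size=(8, d_in)), rng.integers(0, 3, size=8)

# Joint loss: each dataset supervises only its own head, while both terms
# would backpropagate through the shared encoder in a full implementation.
loss_a = cross_entropy(softmax(encode(x_a) @ W_head_a), y_a)
loss_b = cross_entropy(softmax(encode(x_b) @ W_head_b), y_b)
joint_loss = loss_a + loss_b

# "Map of Hate"-style view: project the shared representations of both
# datasets into the same 2D space (here via PCA on the pooled features),
# so examples from either labeling scheme can be compared side by side.
h = encode(np.vstack([x_a, x_b]))
h_centered = h - h.mean(axis=0)
_, _, vt = np.linalg.svd(h_centered, full_matrices=False)
map_2d = h_centered @ vt[:2].T  # one (x, y) point per example
```

Because the harmless classes of both datasets land in the same region of this shared space, points from one dataset that fall among the other's harmless cluster are candidates for labeling errors.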
Related papers
- Towards Interpretable Hate Speech Detection using Large Language Model-extracted Rationales [15.458557611029518]
Social media platforms are a prominent arena for users to engage in interpersonal discussions and express opinions.
This creates a need to automatically identify and flag instances of hate speech.
We propose to use state-of-the-art Large Language Models (LLMs) to extract features in the form of rationales from the input text.
arXiv Detail & Related papers (2024-03-19T03:22:35Z)
- MetaHate: A Dataset for Unifying Efforts on Hate Speech Detection [2.433983268807517]
Hate speech poses significant social, psychological, and occasionally physical threats to targeted individuals and communities.
Current computational linguistic approaches for tackling this phenomenon rely on labelled social media datasets for training.
We scrutinized over 60 datasets, selectively integrating the pertinent ones into MetaHate.
Our findings contribute to a deeper understanding of the existing datasets, paving the way for training more robust and adaptable models.
arXiv Detail & Related papers (2024-01-12T11:54:53Z)
- Into the LAIONs Den: Investigating Hate in Multimodal Datasets [67.21783778038645]
This paper investigates the effect of scaling datasets on hateful content through a comparative audit of two datasets: LAION-400M and LAION-2B.
We found that hate content increased by nearly 12% with dataset scale, measured both qualitatively and quantitatively.
We also found that filtering dataset contents using Not Safe For Work (NSFW) scores computed from images alone does not exclude all of the harmful content in the alt-text.
arXiv Detail & Related papers (2023-11-06T19:00:05Z)
- Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus of 4.5 × 10^6 authored texts derived from public sources.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
arXiv Detail & Related papers (2023-10-17T09:01:17Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows the superior performance of non-matched image-text pair detection when the training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Hate Speech and Offensive Language Detection using an Emotion-aware Shared Encoder [1.8734449181723825]
Existing works on hate speech and offensive language detection produce promising results based on pre-trained transformer models.
This paper proposes a multi-task joint learning approach that combines external emotional features extracted from other corpora.
Our findings demonstrate that emotional knowledge helps to more reliably identify hate speech and offensive language across datasets.
arXiv Detail & Related papers (2023-02-17T09:31:06Z)
- Anti-Asian Hate Speech Detection via Data Augmented Semantic Relation Inference [4.885207279350052]
We propose a novel approach to leverage sentiment hashtags to enhance hate speech detection in a natural language inference framework.
We design a novel framework SRIC that simultaneously performs two tasks: (1) semantic relation inference between online posts and sentiment hashtags, and (2) sentiment classification on these posts.
arXiv Detail & Related papers (2022-04-14T15:03:35Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Trawling for Trolling: A Dataset [56.1778095945542]
We present a dataset that models trolling as a subcategory of offensive content.
The dataset has 12,490 samples, split across 5 classes: Normal, Profanity, Trolling, Derogatory, and Hate Speech.
arXiv Detail & Related papers (2020-08-02T17:23:55Z)
- Investigating Deep Learning Approaches for Hate Speech Detection in Social Media [20.974715256618754]
The misuse of freedom of expression has led to an increase in various cybercrimes and anti-social activities.
Hate speech is one such issue that must be addressed seriously, as it could otherwise threaten the integrity of the social fabric.
In this paper, we propose deep learning approaches utilizing various embeddings for detecting various types of hate speech in social media.
arXiv Detail & Related papers (2020-05-29T17:28:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including this list) and is not responsible for any consequences of its use.