Hate Speech According to the Law: An Analysis for Effective Detection
- URL: http://arxiv.org/abs/2412.06144v1
- Date: Mon, 09 Dec 2024 01:58:10 GMT
- Title: Hate Speech According to the Law: An Analysis for Effective Detection
- Authors: Katerina Korre, John Pavlopoulos, Paolo Gajo, Alberto Barrón-Cedeño
- Abstract summary: This study highlights the importance of amplifying research on prosecutable hate speech.
It provides insights into effective strategies for combating hate speech within the parameters of legal frameworks.
- Abstract: The issue of hate speech extends beyond the confines of the online realm. It is a problem with real-life repercussions, prompting most nations to formulate legal frameworks that classify hate speech as a punishable offence. These legal frameworks differ from one country to another, adding to the confusion that online platforms face when addressing reported instances of hate speech. With definitions of hate speech falling short of providing a robust framework, we turn our gaze onto hate speech laws. We consult the opinion of legal experts on a hate speech dataset and experiment with various approaches, such as models pretrained on hate speech and legal data, as well as two large language models (Qwen2-7B-Instruct and Meta-Llama-3-70B). Due to the time-consuming nature of data acquisition for prosecutable hate speech, we use pseudo-labeling to improve our pretrained models. This study highlights the importance of amplifying research on prosecutable hate speech and provides insights into effective strategies for combating hate speech within the parameters of legal frameworks. Our findings show that legal knowledge in the form of annotations can be useful when classifying prosecutable hate speech, yet more attention should be paid to the differences between the laws.
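The pseudo-labeling step mentioned in the abstract can be sketched in a few lines: a model trained on the small labeled set assigns labels to unlabeled examples, and only high-confidence predictions are added back to the training set before retraining. The toy 1-D threshold classifier, data, and confidence values below are purely illustrative, not the authors' implementation.

```python
import math

def train(examples):
    """Toy 1-D 'classifier': decision threshold at the midpoint of class means."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict_proba(threshold, x):
    """Confidence grows with distance above the decision threshold."""
    return 1 / (1 + math.exp(-(x - threshold)))

def pseudo_label(labeled, unlabeled, confidence=0.9, rounds=2):
    """Iteratively add only high-confidence predictions to the training set."""
    data = list(labeled)
    for _ in range(rounds):
        t = train(data)
        remaining = []
        for x in unlabeled:
            p = predict_proba(t, x)
            if p >= confidence:
                data.append((x, 1))      # confident positive: pseudo-label it
            elif p <= 1 - confidence:
                data.append((x, 0))      # confident negative: pseudo-label it
            else:
                remaining.append(x)      # uncertain: leave unlabeled
        unlabeled = remaining
    return train(data)

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 9.5, 5.2]
threshold = pseudo_label(labeled, unlabeled)  # 0.5 and 9.5 get pseudo-labels; 5.2 stays out
```

The key design choice is the confidence gate: ambiguous examples (like 5.2 above) are never pseudo-labeled, which limits the noise fed back into training.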
Related papers
- OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models [55.63479003621053]
We introduce OWLS, an open-access suite of multilingual speech recognition and translation models.
We use OWLS to derive neural scaling laws, showing how final performance can be reliably predicted when scaling.
We show how OWLS can be used to power new research directions by discovering emergent abilities in large-scale speech models.
arXiv Detail & Related papers (2025-02-14T18:51:40Z)
- Dealing with Annotator Disagreement in Hate Speech Classification [0.0]
This paper examines strategies for addressing annotator disagreement, an issue that has been largely overlooked.
We evaluate different approaches to deal with annotator disagreement regarding hate speech classification in Turkish tweets, based on a fine-tuned BERT model.
Our work highlights the importance of the problem and provides state-of-the-art benchmark results for the detection and understanding of hate speech in online discourse.
arXiv Detail & Related papers (2025-02-12T10:19:50Z)
- A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA Compliance [0.0]
This research presents a novel application capable of implementing legal and ethical reasoning into the content moderation process.
Two use cases fundamental to online communication are presented and implemented using technologies such as GPT-3.5, Solid Pods, and the rule language Prova.
The work proposes a novel approach to reason within different legal and ethical definitions of hate speech and plan the fitting counter hate speech.
arXiv Detail & Related papers (2024-10-10T08:28:38Z)
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, scoring abusive speech on four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z)
- An Investigation of Large Language Models for Real-World Hate Speech Detection [46.15140831710683]
A major limitation of existing methods is that hate speech detection is a highly contextual problem.
Recently, large language models (LLMs) have demonstrated state-of-the-art performance in several natural language tasks.
Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech.
arXiv Detail & Related papers (2024-01-07T00:39:33Z)
- Hate Speech Detection via Dual Contrastive Learning [25.878271501274245]
We propose a novel dual contrastive learning framework for hate speech detection.
Our framework jointly optimizes the self-supervised and supervised contrastive learning losses to capture span-level information.
We conduct experiments on two publicly available English datasets, and experimental results show that the proposed model outperforms the state-of-the-art models.
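The joint objective described above can be illustrated with a simplified InfoNCE-style formulation. The functions, vectors, and the 50/50 weighting below are a generic sketch of combining a self-supervised and a supervised contrastive term, not the paper's exact losses.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def contrastive_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style loss: pull positives toward the anchor, push negatives away."""
    score = lambda v: math.exp(cosine(anchor, v) / temperature)
    pos = sum(score(p) for p in positives)
    return -math.log(pos / (pos + sum(score(n) for n in negatives)))

def dual_loss(anchor, augmented_view, same_class, other_class, weight=0.5):
    # Self-supervised term: the augmented view of the anchor is the only positive.
    l_self = contrastive_loss(anchor, [augmented_view], other_class)
    # Supervised term: every example sharing the anchor's label is a positive.
    l_sup = contrastive_loss(anchor, same_class, other_class)
    return weight * l_self + (1 - weight) * l_sup
```

The self-supervised term needs no labels (positives come from augmentation), while the supervised term uses label information; jointly weighting them is what makes the objective "dual".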
arXiv Detail & Related papers (2023-07-10T13:23:36Z)
- Towards Legally Enforceable Hate Speech Detection for Public Forums [29.225955299645978]
This research introduces a new perspective and task for enforceable hate speech detection.
We use a dataset annotated on violations of eleven possible definitions by legal experts.
Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set.
arXiv Detail & Related papers (2023-05-23T04:34:41Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset [46.156169284961045]
We offer an approach to filtering grounded in law, which directly addresses the tradeoffs in filtering material.
First, we gather and make available the Pile of Law, a 256GB dataset of open-source English-language legal and administrative data.
Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons.
Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data.
arXiv Detail & Related papers (2022-07-01T06:25:15Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
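A common remedy for the label imbalance described above is inverse-frequency class weighting, so that the rare hate class contributes as much to the loss as the frequent non-hate class. The sketch below is a generic illustration of that idea, not necessarily the paper's exact method.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes (e.g. hate) count more in the loss."""
    counts = Counter(labels)
    total, n_classes = len(labels), len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# 90 non-hate (0) versus 10 hate (1): the rare class gets a 9x larger weight.
weights = class_weights([0] * 90 + [1] * 10)
```

These weights are typically passed to the training loss so each misclassified hate example is penalized proportionally more, counteracting the skewed label ratio.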
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Countering Online Hate Speech: An NLP Perspective [34.19875714256597]
Online toxicity - an umbrella term for online hateful behavior - manifests itself in forms such as online hate speech.
The rising mass communication through social media further exacerbates the harmful consequences of online hate speech.
This paper presents a holistic conceptual framework on hate-speech NLP countering methods along with a thorough survey on the current progress of NLP for countering online hate speech.
arXiv Detail & Related papers (2021-09-07T08:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.