Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study
- URL: http://arxiv.org/abs/2505.06149v3
- Date: Sat, 24 May 2025 11:35:05 GMT
- Title: Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study
- Authors: Faeze Ghorbanpour, Daryna Dementieva, Alexander Fraser
- Abstract summary: This work evaluates LLM prompting-based detection across eight non-English languages. We show that while zero-shot and few-shot prompting lag behind fine-tuned encoder models on most of the real-world evaluation sets, they achieve better generalization on functional tests for hate speech detection.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite growing interest in automated hate speech detection, most existing approaches overlook the linguistic diversity of online content. Multilingual instruction-tuned large language models such as LLaMA, Aya, Qwen, and BloomZ offer promising capabilities across languages, but their effectiveness in identifying hate speech through zero-shot and few-shot prompting remains underexplored. This work evaluates LLM prompting-based detection across eight non-English languages, utilizing several prompting techniques and comparing them to fine-tuned encoder models. We show that while zero-shot and few-shot prompting lag behind fine-tuned encoder models on most of the real-world evaluation sets, they achieve better generalization on functional tests for hate speech detection. Our study also reveals that prompt design plays a critical role, with each language often requiring customized prompting techniques to maximize performance.
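The zero-shot and few-shot prompting setups the abstract compares can be sketched as a simple template builder: zero-shot supplies only the instruction and the input text, while few-shot prepends labeled demonstrations. This is a minimal illustration, not the paper's actual prompt wording or label set; the instruction, labels, and example texts below are hypothetical.

```python
def build_prompt(text, examples=None):
    """Build a hate speech classification prompt for an instruction-tuned LLM.

    With examples=None the prompt is zero-shot; passing a list of
    (text, label) pairs yields a few-shot prompt with in-context
    demonstrations before the query.
    """
    lines = ["Classify the following text as HATE or NOT-HATE."]
    # Few-shot: prepend labeled demonstrations, one per block.
    for ex_text, ex_label in (examples or []):
        lines.append(f"Text: {ex_text}\nLabel: {ex_label}")
    # The query itself, ending with an open label slot for the model to fill.
    lines.append(f"Text: {text}\nLabel:")
    return "\n\n".join(lines)

# Zero-shot: instruction + query only.
zero_shot = build_prompt("example input text")

# Few-shot: two illustrative demonstrations precede the query.
few_shot = build_prompt(
    "example input text",
    examples=[("an offensive example", "HATE"),
              ("a friendly example", "NOT-HATE")],
)
```

As the abstract notes, per-language customization matters in practice: the instruction could be translated into the target language or the demonstrations drawn from that language's training split, both of which are variations on this same template.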
Related papers
- Speech-IFEval: Evaluating Instruction-Following and Quantifying Catastrophic Forgetting in Speech-Aware Language Models [49.1574468325115]
Recent SLMs integrate speech perception with large language models (LLMs), often degrading textual capabilities due to speech-centric training. We introduce Speech-IFEval, an evaluation framework designed to assess instruction-following capabilities. Our findings show that most SLMs struggle with even basic instructions, performing far worse than text-based LLMs.
arXiv Detail & Related papers (2025-05-25T08:37:55Z) - LLMsAgainstHate @ NLU of Devanagari Script Languages 2025: Hate Speech Detection and Target Identification in Devanagari Languages via Parameter Efficient Fine-Tuning of LLMs [9.234570108225187]
We propose a Parameter-Efficient Fine-Tuning (PEFT) based solution for hate speech detection and target identification. We evaluate multiple LLMs on the Devanagari dataset provided by Thapa et al. (2025). Results demonstrate the efficacy of our approach in handling Devanagari-scripted content.
arXiv Detail & Related papers (2024-12-22T18:38:24Z) - ChatZero:Zero-shot Cross-Lingual Dialogue Generation via Pseudo-Target Language [53.8622516025736]
We propose a novel end-to-end zero-shot dialogue generation model ChatZero based on cross-lingual code-switching method.
Experiments on the multilingual DailyDialog and DSTC7-AVSD datasets demonstrate that ChatZero can achieve more than 90% of the original performance.
arXiv Detail & Related papers (2024-08-16T13:11:53Z) - Understanding and Mitigating Language Confusion in LLMs [76.96033035093204]
We evaluate 15 typologically diverse languages with existing and newly-created English and multilingual prompts. We find that Llama Instruct and Mistral models exhibit high degrees of language confusion. We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT and preference tuning.
arXiv Detail & Related papers (2024-06-28T17:03:51Z) - The Ups and Downs of Large Language Model Inference with Vocabulary Trimming by Language Heuristics [74.99898531299148]
This research examines vocabulary trimming (VT), which restricts embedding entries to the language of interest to improve time and memory efficiency.
We apply two heuristics to trim the full vocabulary - Unicode-based script filtering and corpus-based selection - across different language families and model sizes.
It is found that VT reduces the memory usage of small models by nearly 50% and has an upper bound of 25% improvement in generation speed.
arXiv Detail & Related papers (2023-11-16T09:35:50Z) - Evaluating ChatGPT's Performance for Multilingual and Emoji-based Hate Speech Detection [4.809236881780707]
Large language models like ChatGPT have recently shown a great promise in performing several tasks, including hate speech detection.
This study aims to evaluate the strengths and weaknesses of the ChatGPT model in detecting hate speech at a granular level across 11 languages.
arXiv Detail & Related papers (2023-05-22T17:36:58Z) - Interpretable Unified Language Checking [42.816372695828306]
We present an interpretable, unified, language checking (UniLC) method for both human and machine-generated language.
We find that LLMs can achieve high performance on a combination of fact-checking, stereotype detection, and hate speech detection tasks.
arXiv Detail & Related papers (2023-04-07T16:47:49Z) - Multilingual Auxiliary Tasks Training: Bridging the Gap between Languages for Zero-Shot Transfer of Hate Speech Detection Models [3.97478982737167]
We show how hate speech detection models benefit from a cross-lingual knowledge proxy brought by auxiliary tasks fine-tuning.
We propose to train on multilingual auxiliary tasks to improve zero-shot transfer of hate speech detection models across languages.
arXiv Detail & Related papers (2022-10-24T08:26:51Z) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models [14.128029444990895]
We introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models.
MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset.
We train and test a high-performing multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.
arXiv Detail & Related papers (2022-06-20T17:54:39Z) - Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.