Low-Resource Languages Jailbreak GPT-4
- URL: http://arxiv.org/abs/2310.02446v2
- Date: Sat, 27 Jan 2024 22:54:52 GMT
- Title: Low-Resource Languages Jailbreak GPT-4
- Authors: Zheng-Xin Yong, Cristina Menghini and Stephen H. Bach
- Abstract summary: Our work exposes the inherent cross-lingual vulnerability of AI safety training and red-teaming of large language models (LLMs).
On the AdvBench benchmark, GPT-4 engages with the unsafe translated inputs and provides actionable items that move users toward their harmful goals 79% of the time.
Other high-/mid-resource languages have significantly lower attack success rates, which suggests that the cross-lingual vulnerability mainly applies to low-resource languages.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI safety training and red-teaming of large language models (LLMs) are
measures to mitigate the generation of unsafe content. Our work exposes the
inherent cross-lingual vulnerability of these safety mechanisms, resulting from
the linguistic inequality of safety training data, by successfully
circumventing GPT-4's safeguard through translating unsafe English inputs into
low-resource languages. On the AdvBench benchmark, GPT-4 engages with the unsafe
translated inputs and provides actionable items that move users toward
their harmful goals 79% of the time, a rate on par with, or even surpassing,
state-of-the-art jailbreaking attacks. Other high-/mid-resource languages have
significantly lower attack success rates, which suggests that the cross-lingual
vulnerability mainly applies to low-resource languages. Previously, limited
training on low-resource languages primarily affected speakers of those
languages, causing technological disparities. However, our work highlights a
crucial shift: this deficiency now poses a risk to all LLM users. Publicly
available translation APIs enable anyone to exploit LLMs' safety
vulnerabilities. Therefore, our work calls for more holistic red-teaming
efforts to develop robust multilingual safeguards with wide language coverage.
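The pipeline the abstract describes is deliberately simple: translate an English benchmark prompt into a low-resource language with a publicly available translation API, send the translated prompt to the model, and record whether the response gives actionable help. Below is a minimal sketch of that evaluation loop; `translate` and `query_llm` are hypothetical placeholders (not from the paper) standing in for a public translation API and a chat-model client.

```python
# Sketch of the translation-based red-teaming loop described in the abstract.
# `translate` and `query_llm` are hypothetical placeholders: wire them to a
# real public translation API and a chat-model client to run an evaluation.

from dataclasses import dataclass


@dataclass
class Sample:
    prompt_en: str   # original English benchmark prompt
    prompt_lrl: str  # prompt translated into a low-resource language
    response: str    # model response, to be labeled by annotators


def translate(text: str, target_lang: str) -> str:
    """Placeholder for a publicly available translation API."""
    raise NotImplementedError


def query_llm(prompt: str) -> str:
    """Placeholder for a chat-model API call (e.g., to GPT-4)."""
    raise NotImplementedError


def run_eval(prompts_en: list[str], target_lang: str) -> list[Sample]:
    """Translate each benchmark prompt, query the model, and collect
    the responses. Attack success rate is then the fraction of responses
    that annotators judge to give actionable help toward the harmful goal."""
    samples = []
    for prompt_en in prompts_en:
        prompt_lrl = translate(prompt_en, target_lang)
        response = query_llm(prompt_lrl)
        samples.append(Sample(prompt_en, prompt_lrl, response))
    return samples
```

A full harness would likely also translate the responses back into English before annotation; that step is omitted here for brevity.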
Related papers
- Transferring Troubles: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning
Our research focuses on cross-lingual backdoor attacks against multilingual models.
We investigate how poisoning the instruction-tuning data in one or two languages can affect the outputs in languages whose instruction-tuning data was not poisoned.
The attack is remarkably effective against models such as mT5, BLOOM, and GPT-3.5-turbo, with success rates surpassing 95% in several languages.
arXiv Detail & Related papers (2024-04-30T14:43:57Z)
- Backdoor Attack on Multilingual Machine Translation
Multilingual neural machine translation (MNMT) systems have security vulnerabilities.
An attacker injects poisoned data into a low-resource language pair to cause malicious translations in other languages.
This type of attack is of particular concern, given the larger attack surface inherent to low-resource settings.
arXiv Detail & Related papers (2024-04-03T01:32:31Z)
- CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs.
Our studies reveal a new and universal safety vulnerability of these models against code input.
We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization.
arXiv Detail & Related papers (2024-03-12T17:55:38Z)
- A Cross-Language Investigation into Jailbreak Attacks in Large Language Models
A particularly underexplored area is the Multilingual Jailbreak attack.
There is a lack of comprehensive empirical studies addressing this specific threat.
This study provides valuable insights into understanding and mitigating Multilingual Jailbreak attacks.
arXiv Detail & Related papers (2024-01-30T06:04:04Z)
- Text Embedding Inversion Security for Multilingual Language Models
Research shows that text can be reconstructed from embeddings, even without knowledge of the underlying model.
This study is the first to investigate multilingual inversion attacks, shedding light on the differences in attacks and defenses across monolingual and multilingual settings.
arXiv Detail & Related papers (2024-01-22T18:34:42Z)
- Multilingual Jailbreak Challenges in Large Language Models
In this study, we reveal the presence of multilingual jailbreak challenges within large language models (LLMs).
We consider two potential risk scenarios: unintentional and intentional.
We propose a novel Self-Defense framework that automatically generates multilingual training data for safety fine-tuning.
arXiv Detail & Related papers (2023-10-10T09:44:06Z)
- All Languages Matter: On the Multilingual Safety of Large Language Models
We build the first multilingual safety benchmark for large language models (LLMs).
XSafety covers 14 kinds of commonly used safety issues across 10 languages that span several language families.
We propose several simple and effective prompting methods to improve the multilingual safety of ChatGPT.
arXiv Detail & Related papers (2023-10-02T05:23:34Z)
- Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts
Large language models (LLMs) are known to effectively perform tasks by simply observing a few exemplars.
We propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English.
Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages.
arXiv Detail & Related papers (2023-06-20T08:27:47Z)
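The prompting idea in that last entry, assembling few-shot translation exemplars from several high-resource languages to elicit translation into English from any source language, can be sketched as a simple prompt builder. The exemplar pairs and template below are illustrative assumptions, not the paper's exact prompt format.

```python
# Illustrative sketch of linguistically-diverse few-shot prompting for
# X -> English translation. The exemplars and template are assumptions
# for illustration; the paper's exact prompt format may differ.

# (source-language sentence, English translation) pairs drawn from
# several high-resource languages.
EXEMPLARS = [
    ("Bonjour, comment allez-vous ?", "Hello, how are you?"),
    ("¿Dónde está la estación de tren?", "Where is the train station?"),
    ("Ich habe das Buch gestern gelesen.", "I read the book yesterday."),
]


def build_prompt(source_text: str) -> str:
    """Assemble a few-shot prompt that demonstrates translation into
    English from multiple languages, then asks for the new sentence."""
    lines = []
    for src, en in EXEMPLARS:
        lines.append(f"Sentence: {src}")
        lines.append(f"English: {en}")
        lines.append("")
    lines.append(f"Sentence: {source_text}")
    lines.append("English:")
    return "\n".join(lines)


print(build_prompt("Molo, unjani namhlanje?"))  # e.g., a Xhosa sentence
```

Because the exemplars come from languages the model handles well, the prompt conveys the task format without requiring any labeled data in the low-resource source language.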
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.