Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
- URL: http://arxiv.org/abs/2511.00689v2
- Date: Tue, 04 Nov 2025 15:19:44 GMT
- Title: Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?
- Authors: Berk Atil, Rebecca J. Passonneau, Fred Morstatter
- Abstract summary: This paper presents the first systematic multilingual evaluation of jailbreaks and defenses across ten languages. We assess two jailbreak types: logical-expression-based and adversarial-prompt-based. Simple defenses can be effective, but are language- and model-dependent.
- Score: 11.718639745472224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak attacks. While many jailbreaks and defenses exist, their cross-lingual generalization remains underexplored. This paper presents the first systematic multilingual evaluation of jailbreaks and defenses across ten languages -- spanning high-, medium-, and low-resource languages -- using six LLMs on HarmBench and AdvBench. We assess two jailbreak types: logical-expression-based and adversarial-prompt-based. For both types, attack success and defense robustness vary across languages: high-resource languages are safer under standard queries but more vulnerable to adversarial ones. Simple defenses can be effective, but are language- and model-dependent. These findings call for language-aware and cross-lingual safety benchmarks for LLMs.
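To make the evaluation protocol concrete, here is a minimal sketch of a per-language attack-success-rate (ASR) loop of the kind the abstract describes. All names (`PROMPTS`, `jailbreak_transform`, `query_model`, `judge_harmful`) are illustrative stubs, not the paper's code; in the paper's setting the prompts would come from HarmBench or AdvBench translations and the judge from a safety classifier.

```python
# Minimal sketch of a per-language attack-success-rate (ASR) computation.
# All names (PROMPTS, jailbreak_transform, query_model, judge_harmful) are
# illustrative stubs, not the paper's code or data.
from collections import defaultdict

# Placeholder data: stand-in prompts per language (benign markers here).
PROMPTS = {
    "en": ["<harmful query 1>", "<harmful query 2>"],
    "de": ["<harmful query 1>"],
    "sw": ["<harmful query 1>"],
}

def jailbreak_transform(prompt: str) -> str:
    # Stand-in for an attack, e.g., wrapping the query in an adversarial template.
    return f"[adversarial template] {prompt}"

def query_model(prompt: str) -> str:
    return "<model response>"  # stub: call the target LLM here

def judge_harmful(prompt: str, response: str) -> bool:
    return False  # stub: a safety classifier or LLM judge would go here

def attack_success_rate() -> dict[str, float]:
    asr = defaultdict(float)
    for lang, prompts in PROMPTS.items():
        hits = sum(judge_harmful(p, query_model(jailbreak_transform(p)))
                   for p in prompts)
        asr[lang] = hits / len(prompts)
    return dict(asr)

print(attack_success_rate())  # {'en': 0.0, 'de': 0.0, 'sw': 0.0} with the stubs
```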
Related papers
- Evaluating LLMs Robustness in Less Resourced Languages with Proxy Models [0.0]
We show how surprisingly strong attacks can be created by altering just a few characters and using a small proxy model for word importance calculation. We find that these character- and word-level attacks drastically alter the predictions of different LLMs. We validate our attack construction methodology on Polish, a low-resource language, and find potential vulnerabilities of LLMs in this language.
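As a rough illustration of proxy-based word importance, here is an occlusion-style sketch: score each word by how much deleting it changes a small proxy model's prediction, then perturb the top-ranked word at the character level. The scoring rule and the perturbation are assumptions for illustration, not the paper's exact procedure.

```python
# Occlusion-style word importance with a (stubbed) small proxy model, plus a
# toy few-character perturbation of the top-ranked word. Illustrative only;
# not the paper's exact scoring rule or attack.

def proxy_score(text: str) -> float:
    return 0.5  # stub: a small proxy classifier's confidence on the original label

def word_importance(text: str) -> list[tuple[str, float]]:
    words = text.split()
    base = proxy_score(text)
    scores = []
    for i, w in enumerate(words):
        occluded = " ".join(words[:i] + words[i + 1:])    # drop one word
        scores.append((w, base - proxy_score(occluded)))  # confidence drop = importance
    return sorted(scores, key=lambda t: t[1], reverse=True)

def perturb_top_word(text: str) -> str:
    # Swap two inner characters of the most important word -- one of many
    # possible few-character edits.
    words = text.split()
    target, _ = word_importance(text)[0]
    i = words.index(target)
    if len(target) > 3:
        words[i] = target[0] + target[2] + target[1] + target[3:]
    return " ".join(words)

print(perturb_top_word("this sentence is an example"))  # "tihs sentence is an example"
```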
arXiv Detail & Related papers (2025-06-09T11:09:39Z)
- Multilingual Collaborative Defense for Large Language Models [39.28665703568305]
One notable vulnerability is the ability to bypass safeguards by translating harmful queries into rare or underrepresented languages. Despite the growing concern, there has been limited research addressing the safeguarding of LLMs in multilingual scenarios. We propose Multilingual Collaborative Defense (MCD), a novel learning method that optimizes a continuous, soft safety prompt automatically.
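MCD's continuous soft safety prompt builds on the general soft-prompt-tuning mechanism: a small matrix of trainable "virtual token" embeddings is prepended to the frozen model's input embeddings. Below is a minimal runnable skeleton of that mechanism; the sizes and the loss are placeholders, and this is not MCD's actual training objective or code.

```python
# Minimal soft-prompt-tuning skeleton: only a small matrix of "virtual token"
# embeddings is trained, prepended to frozen-model input embeddings.
# Sizes and the loss are placeholders; this is not MCD's objective or code.
import torch
import torch.nn as nn

EMB_DIM, PROMPT_LEN = 16, 4  # small illustrative sizes

soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, EMB_DIM) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

def prepend_prompt(input_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend the trainable soft prompt to a batch of input embeddings."""
    batch = input_embeds.shape[0]
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    return torch.cat([prompt, input_embeds], dim=1)

# Toy training step. In an MCD-like setup the loss would reward safe refusals
# on harmful queries across languages; here it is a placeholder objective.
embeds = torch.randn(2, 8, EMB_DIM)          # stand-in token embeddings
loss = prepend_prompt(embeds).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()                             # only the soft prompt is updated
```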
arXiv Detail & Related papers (2025-05-17T04:47:16Z)
- MrGuard: A Multilingual Reasoning Guardrail for Universal LLM Safety [56.77103365251923]
Large Language Models (LLMs) are susceptible to adversarial attacks such as jailbreaking. This vulnerability is exacerbated in multilingual settings, where multilingual safety-aligned data is often limited. We introduce a multilingual guardrail with reasoning for prompt classification.
arXiv Detail & Related papers (2025-04-21T17:15:06Z)
- QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language [44.27350994698781]
We propose a novel framework to examine the generalizability of safety alignment. By treating LLMs as knowledge databases, we translate malicious queries in natural language into structured non-natural query language. We conduct extensive experiments on mainstream LLMs, and the results show that QueryAttack can achieve high attack success rates.
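As an illustration of recasting a natural-language request as a structured, non-natural query, here is a toy SQL-style rewriter. The template is a guess at the general flavor of such a transformation, not QueryAttack's actual query grammar, and the example topic is benign.

```python
# Toy natural-language -> structured-query rewriting in the spirit of the
# "LLM as knowledge database" framing. The SQL-like template is a guess at
# the general flavor, not QueryAttack's actual query grammar.

def to_structured_query(topic: str, content_type: str = "steps") -> str:
    # Recast an imperative request as a database-style retrieval query.
    return (
        f"SELECT {content_type} FROM knowledge_base "
        f"WHERE topic = '{topic}' LIMIT 1;"
    )

print(to_structured_query("baking sourdough bread"))
# SELECT steps FROM knowledge_base WHERE topic = 'baking sourdough bread' LIMIT 1;
```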
arXiv Detail & Related papers (2025-02-13T19:13:03Z)
- Playing Language Game with LLMs Leads to Jailbreaking [18.63358696510664]
We introduce two novel jailbreak methods based on mismatched language games and custom language games. We demonstrate the effectiveness of our methods, achieving success rates of 93% on GPT-4o, 89% on GPT-4o-mini, and 83% on Claude-3.5-Sonnet.
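A language game in this sense is a systematic, reversible re-encoding of text. The toy encoder below applies an Ubbi-Dubbi-style rule (insert "ub" before each vowel group); the specific rule is illustrative and may differ from the games the paper actually uses.

```python
# Toy "language game": a systematic, reversible re-encoding of text using an
# Ubbi-Dubbi-style rule (insert "ub" before each vowel group). The specific
# rule is illustrative and may differ from the games used in the paper.
import re

def ubbi_encode(text: str) -> str:
    return re.sub(r"([aeiouAEIOU]+)", r"ub\1", text)

def ubbi_decode(text: str) -> str:
    return re.sub(r"ub([aeiouAEIOU]+)", r"\1", text)

msg = "speak freely"
enc = ubbi_encode(msg)
print(enc)                      # spubeak frubeely
assert ubbi_decode(enc) == msg  # round-trips back to the original
```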
arXiv Detail & Related papers (2024-11-16T13:07:13Z)
- Benchmarking LLM Guardrails in Handling Multilingual Toxicity [57.296161186129545]
We introduce a comprehensive multilingual test suite, spanning seven datasets and over ten languages, to benchmark the performance of state-of-the-art guardrails.
We investigate the resilience of guardrails against recent jailbreaking techniques, and assess the impact of in-context safety policies and language resource availability on guardrails' performance.
Our findings show that existing guardrails are still ineffective at handling multilingual toxicity and lack robustness against jailbreaking prompts.
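A guardrail benchmark of this kind reduces to per-language classification accuracy. The sketch below shows that skeleton with a stub in place of a real moderation model and placeholder labeled samples; none of the names come from the paper.

```python
# Skeleton of a per-language guardrail benchmark. `guardrail_flags` is a stub
# standing in for a real moderation model; the labeled samples are placeholders.

SAMPLES = {
    "en": [("<toxic text>", True), ("<benign text>", False)],
    "hi": [("<toxic text>", True), ("<benign text>", False)],
}

def guardrail_flags(text: str) -> bool:
    return "<toxic" in text  # stub: call a real guardrail here

def per_language_accuracy() -> dict[str, float]:
    return {
        lang: sum(guardrail_flags(t) == label for t, label in items) / len(items)
        for lang, items in SAMPLES.items()
    }

print(per_language_accuracy())  # {'en': 1.0, 'hi': 1.0} with the stub
```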
arXiv Detail & Related papers (2024-10-29T15:51:24Z)
- EnJa: Ensemble Jailbreak on Large Language Models [69.13666224876408]
Large Language Models (LLMs) are increasingly being deployed in safety-critical applications.
LLMs can still be jailbroken by carefully crafted malicious prompts, producing content that violates policy regulations.
We propose a novel EnJa attack that hides harmful instructions using a prompt-level jailbreak, boosts the attack success rate using a gradient-based attack, and connects the two attack types via a template-based connector.
arXiv Detail & Related papers (2024-08-07T07:46:08Z)
- Weak-to-Strong Jailbreaking on Large Language Models [92.52448762164926]
Large language models (LLMs) are vulnerable to jailbreak attacks. Existing jailbreaking methods are computationally costly. We propose the weak-to-strong jailbreaking attack.
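The weak-to-strong idea steers a large model's decoding with two small models; a common formulation shifts the strong model's next-token log-probabilities by the scaled difference between a small "unsafe" and a small "safe" model. The sketch below implements that arithmetic in the abstract; the weight `ALPHA` and the random stand-in logits are assumptions, not the paper's exact setup.

```python
# Illustrative weak-to-strong decoding arithmetic: shift the strong model's
# next-token log-probabilities by the scaled difference between a small
# "unsafe" and a small "safe" model. ALPHA and the random logits are stand-ins.
import torch

VOCAB = 32    # toy vocabulary size
ALPHA = 1.0   # amplification weight (assumed value)

def combined_next_token_probs(strong: torch.Tensor,
                              weak_safe: torch.Tensor,
                              weak_unsafe: torch.Tensor) -> torch.Tensor:
    # Each argument is a [VOCAB] tensor of raw logits from one model.
    shifted = strong.log_softmax(-1) + ALPHA * (
        weak_unsafe.log_softmax(-1) - weak_safe.log_softmax(-1)
    )
    return shifted.softmax(-1)  # renormalize into a probability distribution

probs = combined_next_token_probs(
    torch.randn(VOCAB), torch.randn(VOCAB), torch.randn(VOCAB)
)
print(float(probs.sum()))  # ~1.0
```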
arXiv Detail & Related papers (2024-01-30T18:48:37Z)
- A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
However, adversarial prompts known as 'jailbreaks' can circumvent safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
arXiv Detail & Related papers (2023-11-14T16:02:16Z)
- All Languages Matter: On the Multilingual Safety of Large Language Models [96.47607891042523]
We build the first multilingual safety benchmark for large language models (LLMs).
XSafety covers 14 kinds of commonly used safety issues across 10 languages that span several language families.
We propose several simple and effective prompting methods to improve the multilingual safety of ChatGPT.
arXiv Detail & Related papers (2023-10-02T05:23:34Z)