Jailbroken: How Does LLM Safety Training Fail?
- URL: http://arxiv.org/abs/2307.02483v1
- Date: Wed, 5 Jul 2023 17:58:10 GMT
- Title: Jailbroken: How Does LLM Safety Training Fail?
- Authors: Alexander Wei and Nika Haghtalab and Jacob Steinhardt
- Abstract summary: "jailbreak" attacks on early releases of ChatGPT elicit undesired behavior.
We investigate why such attacks succeed and how they can be created.
New attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests.
- Score: 92.8748773632051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models trained for safety and harmlessness remain susceptible
to adversarial misuse, as evidenced by the prevalence of "jailbreak" attacks on
early releases of ChatGPT that elicit undesired behavior. Going beyond
recognition of the issue, we investigate why such attacks succeed and how they
can be created. We hypothesize two failure modes of safety training: competing
objectives and mismatched generalization. Competing objectives arise when a
model's capabilities and safety goals conflict, while mismatched generalization
occurs when safety training fails to generalize to a domain for which
capabilities exist. We use these failure modes to guide jailbreak design and
then evaluate state-of-the-art models, including OpenAI's GPT-4 and Anthropic's
Claude v1.3, against both existing and newly designed attacks. We find that
vulnerabilities persist despite the extensive red-teaming and safety-training
efforts behind these models. Notably, new attacks utilizing our failure modes
succeed on every prompt in a collection of unsafe requests from the models'
red-teaming evaluation sets and outperform existing ad hoc jailbreaks. Our
analysis emphasizes the need for safety-capability parity -- that safety
mechanisms should be as sophisticated as the underlying model -- and argues
against the idea that scaling alone can resolve these safety failure modes.
Related papers
- How Jailbreak Defenses Work and Ensemble? A Mechanistic Investigation [39.44000290664494]
Jailbreak attacks, where harmful prompts bypass generative models' built-in safety, raise serious concerns about model vulnerability.
This paper systematically examines jailbreak defenses by reframing the standard generation task as a binary classification problem.
We identify two key defense mechanisms: safety shift, which increases refusal rates across all queries, and harmfulness discrimination, which improves the model's ability to distinguish between harmful and benign inputs.
arXiv Detail & Related papers (2025-02-20T12:07:40Z) - OpenAI o1 System Card [274.83891368890977]
The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought.
This report outlines the safety work carried out for the OpenAI o1 and OpenAI o1-mini models, including safety evaluations, external red teaming, and Preparedness Framework evaluations.
arXiv Detail & Related papers (2024-12-21T18:04:31Z) - Jailbreaking? One Step Is Enough! [6.142918017301964]
Large language models (LLMs) excel in various tasks but remain vulnerable to jailbreak attacks, where adversaries manipulate prompts to generate harmful outputs.
We propose a Reverse Embedded Defense Attack (REDA) mechanism that disguises the attack intention as the "defense" intention.
To enhance the model's confidence and guidance in "defensive" intentions, we adopt in-context learning (ICL) with a small number of attack examples.
arXiv Detail & Related papers (2024-12-17T07:33:41Z) - Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment [97.38766396447369]
Despite training-time safety alignment, MLLMs remain vulnerable to jailbreak attacks.
We propose Immune, an inference-time defense framework that leverages a safe reward model to defend against jailbreak attacks.
arXiv Detail & Related papers (2024-11-27T19:00:10Z) - The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense [56.32083100401117]
We investigate why Vision Large Language Models (VLLMs) are prone to jailbreak attacks.
We then make a key observation: existing defense mechanisms suffer from an textbfover-prudence problem.
We find that the two representative evaluation methods for jailbreak often exhibit chance agreement.
arXiv Detail & Related papers (2024-11-13T07:57:19Z) - You Know What I'm Saying: Jailbreak Attack via Implicit Reference [22.520950422702757]
This study identifies a previously overlooked vulnerability, which we term Attack via Implicit Reference (AIR)
AIR decomposes a malicious objective into permissible objectives and links them through implicit references within the context.
Our experiments demonstrate AIR's effectiveness across state-of-the-art LLMs, achieving an attack success rate (ASR) exceeding 90% on most models.
arXiv Detail & Related papers (2024-10-04T18:42:57Z) - WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z) - SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance [48.36220909956064]
SafeAligner is a methodology implemented at the decoding stage to fortify defenses against jailbreak attacks.
We develop two specialized models: the Sentinel Model, which is trained to foster safety, and the Intruder Model, designed to generate riskier responses.
We show that SafeAligner can increase the likelihood of beneficial tokens, while reducing the occurrence of harmful ones.
arXiv Detail & Related papers (2024-06-26T07:15:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.