Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation
Engineering
- URL: http://arxiv.org/abs/2401.06824v2
- Date: Sat, 17 Feb 2024 04:21:02 GMT
- Title: Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation
Engineering
- Authors: Tianlong Li, Shihan Dou, Wenhao Liu, Muling Wu, Changze Lv, Xiaoqing
Zheng, Xuanjing Huang
- Abstract summary: We propose a novel jailbreaking approach, named Jailbreaking LLMs through Representation Engineering (JRE).
Our method requires only a small number of query pairs to extract "safety patterns" that can be used to circumvent the target model's defenses.
Building upon these findings, we also introduce a novel defense framework inspired by JRE principles, which demonstrates notable effectiveness.
- Score: 44.10397472780012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Jailbreaking techniques aim to probe the boundaries of safety in large
language models (LLMs) by inducing them to generate toxic responses to
malicious queries, a significant concern within the LLM community. While
existing jailbreaking methods primarily rely on prompt engineering, altering
inputs to evade LLM safety mechanisms, they suffer from low attack success
rates and significant time overheads, rendering them inflexible. To overcome
these limitations, we propose a novel jailbreaking approach, named Jailbreaking
LLMs through Representation Engineering (JRE). Our method requires only a small
number of query pairs to extract "safety patterns" that can be used to
circumvent the target model's defenses, achieving unprecedented jailbreaking
performance. Building upon these findings, we also introduce a novel defense
framework inspired by JRE principles, which demonstrates notable effectiveness.
Extensive experimentation confirms the superior performance of the JRE attacks
and the robustness of the JRE defense framework. We hope this study contributes
to advancing the understanding of model safety issues through the lens of
representation engineering.
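The abstract outlines the pipeline (extract per-layer "safety patterns" from a few query pairs, then use them to suppress the refusal behavior) without code. Below is a minimal illustrative sketch of how such an extract-and-subtract scheme could be wired up, not the authors' released implementation: the model choice, the last-token difference-of-means estimator, the per-layer subtraction hooks, and the scale `alpha` are all assumptions made for the example, which assumes a Llama-style Hugging Face model whose decoder layers sit at `model.model.layers`.

```python
# Hedged sketch of a JRE-style extract-and-subtract scheme; all names and
# hyperparameters here are illustrative assumptions, not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical target model
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def last_token_states(prompt: str) -> list[torch.Tensor]:
    """Hidden state of the final prompt token at every layer (plus embeddings)."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model(ids, output_hidden_states=True)
    return [h[0, -1].float() for h in out.hidden_states]

@torch.no_grad()
def extract_safety_pattern(query_pairs: list[tuple[str, str]]) -> list[torch.Tensor]:
    """Per-layer difference-of-means direction between queries the model refuses
    and matched queries it answers; the abstract reports a handful of pairs suffice."""
    sums = None
    for refused, answered in query_pairs:
        diffs = [r - a for r, a in zip(last_token_states(refused),
                                       last_token_states(answered))]
        sums = diffs if sums is None else [s + d for s, d in zip(sums, diffs)]
    return [s / len(query_pairs) for s in sums]

def install_subtraction_hooks(pattern: list[torch.Tensor], alpha: float = 1.0):
    """Subtract the extracted direction from every position of each decoder
    layer's output, steering generation away from the refusal representation."""
    handles = []
    for layer, direction in zip(model.model.layers, pattern[1:]):  # skip embeddings
        d = (alpha * direction).to(model.dtype)
        def hook(_module, _inputs, output, d=d):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden - d  # remove the "safety pattern" component
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
        handles.append(layer.register_forward_hook(hook))
    return handles  # call h.remove() on each handle to restore the defended model
```

The abstract describes the companion defense only as "inspired by JRE principles"; a natural counterpart to the sketch above would monitor or restore activations along the extracted direction, but that reading goes beyond what the abstract itself states.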
Related papers
- Virtual Context: Enhancing Jailbreak Attacks with Special Token Injection [54.05862550647966]
This paper introduces Virtual Context, which leverages special tokens, previously overlooked in LLM security, to improve jailbreak attacks.
Comprehensive evaluations show that Virtual Context-assisted jailbreak attacks can improve the success rates of four widely used jailbreak methods by approximately 40%.
arXiv Detail & Related papers (2024-06-28T11:35:54Z)
- Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks [59.46556573924901]
This paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism for large language models (LLMs).
Unlike previous approaches, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs.
Empirical results conducted on LLAMA-2-7B-Chat and Mistral-7B-Instruct-v0.2 models demonstrate the robustness and adaptability of DPP.
arXiv Detail & Related papers (2024-05-30T14:40:35Z)
- Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing [14.094372002702476]
Large language models (LLMs) are increasingly being adopted in a wide range of real-world applications.
Recent studies have shown that LLMs are vulnerable to deliberately crafted adversarial prompts.
We propose a novel defense method termed Layer-specific Editing (LED) to enhance the resilience of LLMs against jailbreak attacks.
arXiv Detail & Related papers (2024-05-28T13:26:12Z)
- Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts [13.176057229119408]
Prompt-based jailbreaking of Large Language Models (LLMs) is receiving growing attention.
We propose a novel approach by focusing on a set of target questions that are inherently more sensitive to jailbreak prompts.
arXiv Detail & Related papers (2024-04-12T08:08:44Z)
- Tastle: Distract Large Language Models for Automatic Jailbreak Attack [9.137714258654842]
We propose a black-box jailbreak framework for automated red teaming of large language models (LLMs).
Our framework is superior to existing jailbreak methods in effectiveness, scalability, and transferability.
We also evaluate the effectiveness of existing jailbreak defense methods against our attack.
arXiv Detail & Related papers (2024-03-13T11:16:43Z)
- Jailbreaking Attack against Multimodal Large Language Model [69.52466793164618]
This paper focuses on jailbreaking attacks against multi-modal large language models (MLLMs).
A maximum likelihood-based algorithm is proposed to find an image Jailbreaking Prompt (imgJP).
Our approach exhibits strong model-transferability, as the generated imgJP can be transferred to jailbreak various models.
arXiv Detail & Related papers (2024-02-04T01:29:24Z)
- Analyzing the Inherent Response Tendency of LLMs: Real-World Instructions-Driven Jailbreak [26.741029482196534]
A "Jailbreak Attack" is a phenomenon in which Large Language Models (LLMs) generate harmful responses when faced with malicious instructions.
We introduce RADIAL, a novel automatic jailbreak method that bypasses the security mechanism by amplifying the potential of LLMs to generate affirmative responses.
Our method achieves excellent attack performance on English malicious instructions against five advanced open-source LLMs, and remains robust in cross-language attacks using Chinese malicious instructions.
arXiv Detail & Related papers (2023-12-07T08:29:58Z)
- A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
However, adversarial prompts known as 'jailbreaks' can circumvent these safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
arXiv Detail & Related papers (2023-11-14T16:02:16Z)
- Jailbreaking Black Box Large Language Models in Twenty Queries [97.29563503097995]
Large language models (LLMs) are vulnerable to adversarial jailbreaks.
We propose an algorithm that generates semantic jailbreaks with only black-box access to an LLM; a minimal sketch of such a query-feedback loop appears after this list.
arXiv Detail & Related papers (2023-10-12T15:38:28Z)
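Where JRE needs access to internal activations, the "Twenty Queries" entry above works purely through the target's text interface. As a point of contrast, here is a minimal, hypothetical sketch of that style of attacker-refines-prompt loop; `query_target`, `query_attacker`, and `judge` are assumed callables standing in for whatever victim model, attacker model, and success judge are available, and the loop only follows the spirit of the cited paper, not its exact algorithm.

```python
# Hedged sketch of a query-efficient black-box jailbreak loop; all callables
# are hypothetical stand-ins, not any paper's released API.
from typing import Callable, Optional

def black_box_attack(
    goal: str,
    query_target: Callable[[str], str],          # one call = one query to the victim LLM
    query_attacker: Callable[[str, list], str],  # attacker LLM proposes a refined prompt
    judge: Callable[[str, str], bool],           # did the response satisfy the goal?
    max_queries: int = 20,
) -> Optional[tuple[str, str]]:
    prompt = goal
    history: list[tuple[str, str]] = []
    for _ in range(max_queries):
        response = query_target(prompt)
        if judge(goal, response):
            return prompt, response              # jailbreak found within budget
        history.append((prompt, response))
        prompt = query_attacker(goal, history)   # refine using past refusals
    return None                                  # query budget exhausted
```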