Attack Prompt Generation for Red Teaming and Defending Large Language
Models
- URL: http://arxiv.org/abs/2310.12505v1
- Date: Thu, 19 Oct 2023 06:15:05 GMT
- Title: Attack Prompt Generation for Red Teaming and Defending Large Language
Models
- Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He
- Abstract summary: Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content.
We propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts.
- Score: 70.157691818224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are susceptible to red teaming attacks, which
can induce LLMs to generate harmful content. Previous research constructs
attack prompts via manual or automatic methods, which have their own
limitations on construction cost and quality. To address these issues, we
propose an integrated approach that combines manual and automatic methods to
economically generate high-quality attack prompts. Specifically, considering
the impressive capabilities of newly emerged LLMs, we propose an attack
framework to instruct LLMs to mimic human-generated prompts through in-context
learning. Furthermore, we propose a defense framework that fine-tunes victim
LLMs through iterative interactions with the attack framework to enhance their
safety against red teaming attacks. Extensive experiments on different LLMs
validate the effectiveness of our proposed attack and defense frameworks.
Additionally, we release a series of attack prompt datasets named SAP with
varying sizes, facilitating the safety evaluation and enhancement of more LLMs.
Our code and dataset are available at https://github.com/Aatrox103/SAP.
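As a rough illustration of the two loops described in the abstract (in-context attack prompt generation and iterative defensive fine-tuning), the following Python sketch shows how they might fit together. All names here (attacker_llm, victim_llm, is_harmful, fine_tune) are hypothetical placeholders rather than the authors' implementation; the released code at https://github.com/Aatrox103/SAP is the authoritative reference.

```python
# Minimal sketch of the attack/defense interplay described in the abstract.
# NOTE: all names (attacker_llm, victim_llm, is_harmful, fine_tune) are
# hypothetical placeholders, not the authors' actual API.
import random
from typing import Callable, List

def generate_attack_prompts(
    attacker_llm: Callable[[str], str],
    seed_prompts: List[str],          # manually written attack prompts
    n_new: int,
    n_shots: int = 3,
) -> List[str]:
    """Ask an LLM to imitate human-written attack prompts via in-context learning."""
    generated = []
    for _ in range(n_new):
        shots = random.sample(seed_prompts, k=min(n_shots, len(seed_prompts)))
        instruction = (
            "Here are some example red-teaming prompts:\n"
            + "\n".join(f"- {s}" for s in shots)
            + "\nWrite one new prompt in the same style."
        )
        generated.append(attacker_llm(instruction))
    return generated

def iterative_defense(
    victim_llm,                        # hypothetical object with .generate() and .fine_tune()
    attacker_llm: Callable[[str], str],
    seed_prompts: List[str],
    is_harmful: Callable[[str], bool], # safety judge (classifier or human)
    rounds: int = 3,
):
    """Alternate between attacking the victim model and fine-tuning it on safe refusals."""
    for _ in range(rounds):
        attacks = generate_attack_prompts(attacker_llm, seed_prompts, n_new=50)
        # Keep only prompts that actually elicit harmful output from the victim.
        successful = [p for p in attacks if is_harmful(victim_llm.generate(p))]
        if not successful:
            break
        # Fine-tune the victim to refuse the successful attack prompts.
        refusals = [(p, "I can't help with that request.") for p in successful]
        victim_llm.fine_tune(refusals)
        # Successful attacks can also grow the pool of in-context examples.
        seed_prompts = seed_prompts + successful
    return victim_llm
```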
Related papers
- Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) [17.670925982912312]
Red-teaming is a technique for identifying vulnerabilities in large language models (LLMs).
This paper presents a detailed threat model and provides a systematization of knowledge (SoK) of red-teaming attacks on LLMs.
arXiv Detail & Related papers (2024-07-20T17:05:04Z)
- MaPPing Your Model: Assessing the Impact of Adversarial Attacks on LLM-based Programming Assistants [14.947665219536708]
We introduce the Malicious Programming Prompt (MaPP) attack, in which an attacker adds a small amount of text to a prompt for a programming task.
We show that our prompt strategy can cause an LLM to add vulnerabilities while continuing to write otherwise correct code.
arXiv Detail & Related papers (2024-07-12T22:30:35Z)
- Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD)
PAD is a pipeline designed to safeguard LLMs by jointly incorporating red-teaming (attack) and blue-teaming (safety training) techniques.
PAD significantly outperforms existing baselines in both finding effective attacks and establishing a robust safety guardrail.
arXiv Detail & Related papers (2024-07-01T23:25:30Z)
- Learning diverse attacks on large language models for robust red-teaming and safety tuning [126.32539952157083]
Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe deployment of large language models.
We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks.
We propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts.
arXiv Detail & Related papers (2024-05-28T19:16:17Z)
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM [27.046944831084776]
Large language models (LLMs) have achieved remarkable performance in various natural language processing tasks.
Chain of Attack (CoA) is a semantic-driven contextual multi-turn attack method that adaptively adjusts its attack policy.
We show that CoA can effectively expose the vulnerabilities of LLMs, and outperform existing attack methods.
arXiv Detail & Related papers (2024-05-09T08:15:21Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting (AdaShield), which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
- MART: Improving LLM Safety with Multi-round Automatic Red-Teaming [72.2127916030909]
We propose a Multi-round Automatic Red-Teaming (MART) method, which incorporates both automatic adversarial prompt writing and safe response generation.
On adversarial prompt benchmarks, the violation rate of an LLM with limited safety alignment drops by up to 84.7% after 4 rounds of MART.
Notably, model helpfulness on non-adversarial prompts remains stable throughout iterations, indicating the target LLM maintains strong performance on instruction following.
arXiv Detail & Related papers (2023-11-13T19:13:29Z)
- SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs. A rough sketch of this perturb-and-aggregate defense appears after this list.
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
- Goal-Oriented Prompt Attack and Safety Evaluation for LLMs [43.93613764464993]
We introduce a pipeline to construct high-quality prompt attack samples, along with a Chinese prompt attack dataset called CPAD.
Our prompts aim to induce LLMs to generate unexpected outputs with several carefully designed prompt attack templates.
The results show that our prompts are significantly harmful to LLMs, with an attack success rate of around 70% against GPT-3.5.
arXiv Detail & Related papers (2023-09-21T07:07:49Z)
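The SmoothLLM entry above describes its defense concretely enough to sketch: perturb several copies of the prompt at the character level, query the model on each copy, and aggregate the per-copy judgments. The sketch below only illustrates that idea under stated assumptions; llm and is_jailbroken are hypothetical callables, not the paper's code or API.

```python
# Rough sketch of the randomized-smoothing idea described in the SmoothLLM
# entry above: perturb several copies of the prompt at the character level,
# query the model on each copy, and aggregate the per-copy judgments.
# `llm` and `is_jailbroken` are hypothetical placeholders, not the paper's API.
import random
import string
from typing import Callable

def perturb(prompt: str, swap_rate: float = 0.1) -> str:
    """Randomly replace a fraction of characters with random printable characters."""
    chars = list(prompt)
    n_swaps = max(1, int(len(chars) * swap_rate))
    for i in random.sample(range(len(chars)), k=min(n_swaps, len(chars))):
        chars[i] = random.choice(string.printable)
    return "".join(chars)

def smoothed_defense(
    llm: Callable[[str], str],
    is_jailbroken: Callable[[str], bool],
    prompt: str,
    n_copies: int = 10,
) -> str:
    """Refuse if the majority of perturbed copies elicit a jailbroken response."""
    responses = [llm(perturb(prompt)) for _ in range(n_copies)]
    votes = sum(is_jailbroken(r) for r in responses)
    if votes > n_copies // 2:
        return "Request blocked: the prompt appears to be adversarial."
    # Otherwise return a response from one of the benign-looking copies.
    safe = [r for r in responses if not is_jailbroken(r)]
    return safe[0] if safe else responses[0]
```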
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.