Certifying LLM Safety against Adversarial Prompting
- URL: http://arxiv.org/abs/2309.02705v3
- Date: Mon, 12 Feb 2024 18:55:34 GMT
- Title: Certifying LLM Safety against Adversarial Prompting
- Authors: Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil
Feizi and Himabindu Lakkaraju
- Abstract summary: Large language models (LLMs) are vulnerable to adversarial attacks that add malicious tokens to an input prompt.
We introduce erase-and-check, the first framework for defending against adversarial prompts with certifiable safety guarantees.
- Score: 75.19953634352258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are vulnerable to adversarial attacks that add
malicious tokens to an input prompt to bypass the safety guardrails of an LLM
and cause it to produce harmful content. In this work, we introduce
erase-and-check, the first framework for defending against adversarial prompts
with certifiable safety guarantees. Given a prompt, our procedure erases tokens
individually and inspects the resulting subsequences using a safety filter. Our
safety certificate guarantees that harmful prompts are not mislabeled as safe
due to an adversarial attack up to a certain size. We implement the safety
filter in two ways, using Llama 2 and DistilBERT, and compare the performance
of erase-and-check for the two cases. We defend against three attack modes: i)
adversarial suffix, where an adversarial sequence is appended at the end of a
harmful prompt; ii) adversarial insertion, where the adversarial sequence is
inserted anywhere in the middle of the prompt; and iii) adversarial infusion,
where adversarial tokens are inserted at arbitrary positions in the prompt, not
necessarily as a contiguous block. Our experimental results demonstrate that
this procedure can obtain strong certified safety guarantees on harmful prompts
while maintaining good empirical performance on safe prompts. Additionally, we
propose three efficient empirical defenses: i) RandEC, a randomized subsampling
version of erase-and-check; ii) GreedyEC, which greedily erases tokens that
maximize the softmax score of the harmful class; and iii) GradEC, which uses
gradient information to optimize tokens to erase. We demonstrate their
effectiveness against adversarial prompts generated by the Greedy Coordinate
Gradient (GCG) attack algorithm. The code for our experiments is available at
https://github.com/aounon/certified-llm-safety.
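
To make the erase-and-check idea concrete, here is a minimal sketch of the suffix-mode procedure as described in the abstract. The safety filter `is_harmful` is a hypothetical placeholder (the paper implements it with Llama 2 or DistilBERT), and the RandEC-style variant shown is only an illustration of the randomized-subsampling idea, not the authors' exact implementation.

```python
import random
from typing import Callable, List


def erase_and_check_suffix(
    tokens: List[str],
    is_harmful: Callable[[List[str]], bool],
    max_erase: int,
) -> bool:
    """Label a prompt harmful if the full prompt or any subsequence obtained
    by erasing up to `max_erase` trailing tokens is flagged by the safety
    filter. For a harmful prompt with an adversarial suffix of at most
    `max_erase` tokens, one of the checked subsequences is the original
    harmful prompt, so the attack cannot make it be labeled safe."""
    for k in range(min(max_erase, len(tokens)) + 1):
        subsequence = tokens[: len(tokens) - k]
        if is_harmful(subsequence):
            return True  # flagged as harmful
    return False  # labeled safe


def rand_ec_suffix(
    tokens: List[str],
    is_harmful: Callable[[List[str]], bool],
    max_erase: int,
    sample_fraction: float,
    seed: int = 0,
) -> bool:
    """RandEC-style empirical variant: check only a random subsample of the
    erased subsequences, trading the certificate for speed."""
    rng = random.Random(seed)
    ks = [k for k in range(min(max_erase, len(tokens)) + 1)
          if rng.random() < sample_fraction]
    return any(is_harmful(tokens[: len(tokens) - k]) for k in ks)
```

The certified version checks all erased subsequences; the insertion and infusion modes described in the abstract enumerate more subsequences and are correspondingly more expensive.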