FigStep: Jailbreaking Large Vision-language Models via Typographic
Visual Prompts
- URL: http://arxiv.org/abs/2311.05608v2
- Date: Wed, 13 Dec 2023 17:54:16 GMT
- Title: FigStep: Jailbreaking Large Vision-language Models via Typographic
Visual Prompts
- Authors: Yichen Gong and Delong Ran and Jinyuan Liu and Conglei Wang and
Tianshuo Cong and Anyu Wang and Sisi Duan and Xiaoyun Wang
- Abstract summary: We propose FigStep, a jailbreaking algorithm against large vision-language models (VLMs)
Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography.
FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries in 10 topics.
- Score: 14.948652267916149
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring the safety of artificial intelligence-generated content (AIGC) is a
longstanding topic in the artificial intelligence (AI) community, and the
safety concerns associated with Large Language Models (LLMs) have been widely
investigated. Recently, large vision-language models (VLMs) have emerged as an
unprecedented revolution, as they are built upon LLMs but can incorporate
additional modalities (e.g., images). However, the safety of VLMs lacks
systematic evaluation, and there may be an overconfidence in the safety
guarantees provided by their underlying LLMs. In this paper, to demonstrate
that introducing additional modality modules leads to unforeseen AI safety
issues, we propose FigStep, a straightforward yet effective jailbreaking
algorithm against VLMs. Instead of feeding textual harmful instructions
directly, FigStep converts the harmful content into images through typography
to bypass the safety alignment within the textual module of the VLMs, inducing
VLMs to output unsafe responses that violate common AI safety policies. In our
evaluation, we manually review 46,500 model responses generated by 3 families
of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total
of 6 VLMs). The experimental results show that FigStep can achieve an average
attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we
demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which
already leverages an OCR detector to filter harmful queries. Above all, our
work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights
the necessity of novel safety alignments between visual and textual modalities.
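The core mechanism in the abstract, converting an instruction into an image through typography so that it reaches the model via the visual channel rather than the text channel, is simple to sketch. The snippet below is a minimal, benign illustration of that rendering step only, assuming Python with Pillow; the font, canvas size, wrapping width, and placeholder prompt are illustrative choices, not details taken from the paper.

# Minimal sketch of the typographic conversion step described in the abstract.
# This is an illustration, not the authors' released implementation; Pillow,
# the font file, the canvas size, and the benign placeholder prompt are all
# assumptions made for demonstration.
from PIL import Image, ImageDraw, ImageFont
import textwrap

def render_text_as_image(text: str,
                         width: int = 760,
                         height: int = 760,
                         font_path: str = "DejaVuSans.ttf",  # assumed font file
                         font_size: int = 36) -> Image.Image:
    """Render `text` onto a plain white canvas, wrapping long lines."""
    canvas = Image.new("RGB", (width, height), color="white")
    draw = ImageDraw.Draw(canvas)
    try:
        font = ImageFont.truetype(font_path, font_size)
    except OSError:
        font = ImageFont.load_default()  # fall back if the assumed font is missing
    # Wrap each input line separately so explicit line breaks are preserved.
    wrapped = "\n".join(textwrap.fill(line, width=32) if line else ""
                        for line in text.splitlines())
    draw.multiline_text((40, 40), wrapped, fill="black", font=font, spacing=12)
    return canvas

# Benign example: a numbered-list layout rendered as an image.
image = render_text_as_image("Steps to bake sourdough bread.\n1.\n2.\n3.")
image.save("typographic_prompt.png")

Per the abstract, the resulting image is then paired with an otherwise benign text prompt, which is what lets the instruction sidestep the safety alignment of the underlying textual module.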
Related papers
- ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time [12.160713548659457]
Adversarial visual inputs can easily bypass VLM defense mechanisms.
We propose a novel two-phase inference-time alignment framework that evaluates input visual contents and output responses.
Experiments show that ETA outperforms baseline methods in terms of harmlessness, helpfulness, and efficiency.
arXiv Detail & Related papers (2024-10-09T07:21:43Z)
- PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach [25.31933913962953]
Large Language Models (LLMs) have gained widespread use, raising concerns about their security.
We introduce PathSeeker, a novel black-box jailbreak method, which is inspired by the game of rats escaping a maze.
Our method outperforms five state-of-the-art attack techniques when tested across 13 commercial and open-source LLMs.
arXiv Detail & Related papers (2024-09-21T15:36:26Z)
- CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration [90.36429361299807]
Multimodal large language models (MLLMs) have demonstrated remarkable success in engaging in conversations involving visual inputs.
The integration of the visual modality has introduced a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs.
We introduce a technique termed CoCA, which amplifies the safety-awareness of the MLLM by calibrating its output distribution.
arXiv Detail & Related papers (2024-09-17T17:14:41Z)
- Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training [67.30423823744506]
This study addresses a critical gap in safety tuning practices for Large Language Models (LLMs).
We introduce a novel approach, Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse to comply with harmful prompts at any position in the response.
DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by appending a segment of a harmful response to the beginning of a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to safety refusal consistently throughout the harmful response sequence.
arXiv Detail & Related papers (2024-07-12T09:36:33Z)
- Safety Alignment for Vision Language Models [21.441662865727448]
We enhance the visual modality safety alignment of Vision Language Models (VLMs) by adding safety modules.
Our method boasts ease of use, high flexibility, and strong controllability, and it enhances safety while having minimal impact on the model's general performance.
arXiv Detail & Related papers (2024-05-22T12:21:27Z)
- Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation [98.02846901473697]
We propose ECSO (Eyes Closed, Safety On), a training-free protecting approach that exploits the inherent safety awareness of MLLMs.
ECSO generates safer responses via adaptively transforming unsafe images into texts to activate the intrinsic safety mechanism of pre-aligned LLMs.
arXiv Detail & Related papers (2024-03-14T17:03:04Z)
- AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Adversarial Visual-Instructions [52.9787902653558]
Large Vision-Language Models (LVLMs) have shown significant progress in responding well to visual instructions from users.
Despite the critical importance of LVLMs' robustness against adversarial threats, current research in this area remains limited.
We introduce AVIBench, a framework designed to analyze the robustness of LVLMs when facing various adversarial visual-instructions.
arXiv Detail & Related papers (2024-03-14T12:51:07Z)
- On Evaluating Adversarial Robustness of Large Vision-Language Models [64.66104342002882]
We evaluate the robustness of large vision-language models (VLMs) in the most realistic and high-risk setting.
In particular, we first craft targeted adversarial examples against pretrained models such as CLIP and BLIP.
Black-box queries on these VLMs can further improve the effectiveness of targeted evasion.
arXiv Detail & Related papers (2023-05-26T13:49:44Z)
- Safety Assessment of Chinese Large Language Models [51.83369778259149]
Large language models (LLMs) may generate insulting and discriminatory content, reflect incorrect social values, and may be used for malicious purposes.
To promote the deployment of safe, responsible, and ethical AI, we release SafetyPrompts, which includes 100k augmented prompts and responses generated by LLMs.
arXiv Detail & Related papers (2023-04-20T16:27:35Z)