BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
- URL: http://arxiv.org/abs/2311.16194v2
- Date: Fri, 22 Mar 2024 02:53:59 GMT
- Title: BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
- Authors: Jiawang Bai, Kuofeng Gao, Shaobo Min, Shu-Tao Xia, Zhifeng Li, Wei Liu
- Abstract summary: BadCLIP is built on a novel and effective mechanism for backdoor attacks on CLIP: influencing both the image and text encoders with the trigger.
It consists of a learnable trigger applied to images and a trigger-aware context generator, such that the trigger can change text features via trigger-aware prompts.
- Score: 55.33331463515103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive Vision-Language Pre-training, known as CLIP, has shown promising effectiveness in addressing downstream image recognition tasks. However, recent works have revealed that the CLIP model can be implanted with a downstream-oriented backdoor. On downstream tasks, the victim model performs well on clean samples but predicts a specific target class whenever a specific trigger is present. To inject a backdoor, existing attacks depend on a large amount of additional data to maliciously fine-tune the entire pre-trained CLIP model, which makes them inapplicable to data-limited scenarios. In this work, motivated by the recent success of learnable prompts, we address this problem by injecting a backdoor into the CLIP model in the prompt learning stage. Our method, named BadCLIP, is built on a novel and effective mechanism for backdoor attacks on CLIP, i.e., influencing both the image and text encoders with the trigger. It consists of a learnable trigger applied to images and a trigger-aware context generator, such that the trigger can change text features via trigger-aware prompts, resulting in a powerful and generalizable attack. Extensive experiments conducted on 11 datasets verify that the clean accuracy of BadCLIP is comparable to that of advanced prompt learning methods and that the attack success rate is higher than 99% in most cases. BadCLIP also generalizes to unseen classes and shows a strong generalization capability under cross-dataset and cross-domain settings.
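As a concrete illustration of the mechanism described in the abstract (a learnable image trigger plus a trigger-aware context generator that shifts the text prompts), here is a minimal PyTorch-style sketch. The frozen CLIP encoders are replaced by simple linear stubs so the example is self-contained and runnable; all dimensions, the class and parameter names, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): trigger-aware prompt learning in the
# spirit of BadCLIP. Frozen CLIP encoders are stood in for by linear stubs.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 512          # joint embedding dimension (placeholder)
N_CTX = 4        # number of learnable context tokens (placeholder)
N_CLASSES = 10   # number of downstream classes (placeholder)
IMG_SIZE = 32    # toy image size; real CLIP inputs would be 224x224

class FrozenEncoderStub(nn.Module):
    """Stand-in for a frozen CLIP encoder; replace with the real encoders."""
    def __init__(self, in_dim, out_dim=D):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

class TriggerAwarePromptLearner(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable additive trigger applied to input images.
        self.trigger = nn.Parameter(torch.zeros(3, IMG_SIZE, IMG_SIZE))
        # Trigger-aware context generator: maps the image feature to
        # context-token offsets, so a triggered image also shifts the prompts.
        self.ctx_base = nn.Parameter(torch.randn(N_CTX, D) * 0.02)
        self.ctx_gen = nn.Sequential(nn.Linear(D, D), nn.ReLU(),
                                     nn.Linear(D, N_CTX * D))
        # Placeholder for tokenized/embedded class names.
        self.cls_emb = nn.Parameter(torch.randn(N_CLASSES, D) * 0.02)
        self.image_encoder = FrozenEncoderStub(3 * IMG_SIZE * IMG_SIZE)
        self.text_encoder = FrozenEncoderStub((N_CTX + 1) * D)

    def forward(self, images, apply_trigger=False):
        if apply_trigger:
            images = (images + self.trigger).clamp(0, 1)
        img_feat = self.image_encoder(images.flatten(1))                  # (B, D)
        # Build trigger-aware prompts: per-image context + class embedding.
        ctx = self.ctx_base + self.ctx_gen(img_feat).view(-1, N_CTX, D)   # (B, N_CTX, D)
        prompts = torch.cat(
            [ctx.unsqueeze(1).expand(-1, N_CLASSES, -1, -1),
             self.cls_emb.unsqueeze(0).unsqueeze(2).expand(ctx.size(0), -1, -1, -1)],
            dim=2)                                                        # (B, C, N_CTX+1, D)
        txt_feat = self.text_encoder(prompts.flatten(2))                  # (B, C, D)
        return 100.0 * torch.einsum("bd,bcd->bc", img_feat, txt_feat)     # logits

# Usage: clean images keep their labels; triggered images are pushed toward
# an attacker-chosen target class during prompt learning.
model = TriggerAwarePromptLearner()
images = torch.rand(8, 3, IMG_SIZE, IMG_SIZE)
labels = torch.randint(0, N_CLASSES, (8,))
target = torch.full_like(labels, 0)  # placeholder target class
loss = F.cross_entropy(model(images), labels) + \
       F.cross_entropy(model(images, apply_trigger=True), target)
loss.backward()  # only the trigger, context generator, and prompts get gradients
```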
Related papers
- NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise [0.19820694575112383]
Backdoor attacks pose a significant threat when using third-party data for deep learning development.
We introduce a novel sample-specific multi-targeted backdoor attack, namely NoiseAttack.
This work is the first of its kind to launch a vision backdoor attack designed to target multiple classes.
arXiv Detail & Related papers (2024-09-03T19:24:46Z)
- BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection [10.99542790672233]
We propose BDetCLIP, a novel test-time backdoor detection method based on contrastive prompting.
We empirically find that the visual representations of backdoored images are insensitive to both benign and malignant changes in class description texts.
Our proposed BDetCLIP is superior to state-of-the-art backdoor detection methods in terms of both effectiveness and efficiency (a minimal sketch of this detection idea appears after this list).
arXiv Detail & Related papers (2024-05-24T06:52:54Z)
- Backdoor Contrastive Learning via Bi-level Trigger Optimization [45.41485634654881]
Contrastive Learning (CL) has attracted enormous attention due to its remarkable capability in unsupervised representation learning.
Recent works have revealed the vulnerability of CL to backdoor attacks.
In this paper, we propose a bi-level trigger optimization approach to achieve an effective backdoor attack against CL.
arXiv Detail & Related papers (2024-04-11T15:55:53Z)
- VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models [65.23688155159398]
Autoregressive Visual Language Models (VLMs) showcase impressive few-shot learning capabilities in a multimodal context.
Recently, multimodal instruction tuning has been proposed to further enhance instruction-following abilities.
Adversaries can implant a backdoor by injecting poisoned samples with triggers embedded in instructions or images.
We propose a multimodal instruction backdoor attack, namely VL-Trojan.
arXiv Detail & Related papers (2024-02-21T14:54:30Z)
- Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks [46.504428925984406]
Contrastive Language-Image Pre-training (CLIP) on large image-caption datasets has achieved remarkable success in zero-shot classification.
CLIP is more vulnerable to targeted data poisoning and backdoor attacks than supervised learning.
We propose a strong defense, SAFECLIP, to safely pre-train CLIP against targeted data poisoning and backdoor attacks.
arXiv Detail & Related papers (2023-10-05T19:42:03Z)
- Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks [52.26631767748843]
We propose ROCLIP, the first effective method for robust pre-training multimodal vision-language models against targeted data poisoning and backdoor attacks.
ROCLIP effectively breaks the association between poisoned image-caption pairs by considering a relatively large and varying pool of random captions.
Our experiments show that ROCLIP renders state-of-the-art targeted data poisoning and backdoor attacks ineffective when pre-training CLIP models (a sketch of the caption-pool matching idea appears after this list).
arXiv Detail & Related papers (2023-03-13T04:49:46Z)
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning [63.72975421109622]
CleanCLIP is a finetuning framework that weakens the learned spurious associations introduced by backdoor attacks.
CleanCLIP maintains model performance on benign examples while erasing a range of backdoor attacks on multimodal contrastive learning.
arXiv Detail & Related papers (2023-03-06T17:48:32Z)
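The BDetCLIP entry above rests on one empirical observation: visual features of backdoored images barely react when the class description texts are changed. The following is a minimal sketch of a detection statistic built on that observation, assuming OpenAI's `clip` package; the class names, perturbed prompt templates, and threshold are illustrative placeholders, not the paper's exact procedure.

```python
# Sketch of a contrastive-prompting detection statistic (not the paper's code):
# measure how much an image's class-similarity distribution moves when the
# class descriptions are perturbed; near-zero movement is treated as suspicious.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["dog", "cat", "car"]  # placeholder downstream classes
benign_prompts = [f"a photo of a {c}" for c in class_names]
# Perturbed ("malignant") descriptions; the real method uses richer text changes.
perturbed_prompts = [f"a blurry low-quality photo of a strange {c}" for c in class_names]

@torch.no_grad()
def text_features(prompts):
    tokens = clip.tokenize(prompts).to(device)
    feats = model.encode_text(tokens).float()
    return feats / feats.norm(dim=-1, keepdim=True)

@torch.no_grad()
def detection_score(image):
    """Larger score = image reacts more to text perturbations (likely clean)."""
    img = preprocess(image).unsqueeze(0).to(device)
    f = model.encode_image(img).float()
    f = f / f.norm(dim=-1, keepdim=True)
    p_benign = (100.0 * f @ text_features(benign_prompts).T).softmax(dim=-1)
    p_perturb = (100.0 * f @ text_features(perturbed_prompts).T).softmax(dim=-1)
    return (p_benign - p_perturb).abs().sum().item()

# Usage (threshold is a placeholder and would be calibrated in practice):
# from PIL import Image
# suspicious = detection_score(Image.open("test.jpg")) < 0.05
```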
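The ROCLIP entry above describes breaking poisoned image-caption associations by matching images against a large, varying pool of random captions instead of their paired captions. Below is a minimal sketch of that matching step with toy tensors standing in for CLIP embeddings; the pool size, refresh schedule, and loss details are assumptions, not the paper's exact recipe.

```python
# Sketch of caption-pool matching in the spirit of ROCLIP (not the authors' code):
# instead of contrasting each image with its own (possibly poisoned) caption,
# pair it with the nearest caption from a large, periodically refreshed pool.
import torch
import torch.nn.functional as F

def pool_matched_contrastive_loss(img_feats, caption_pool, temperature=0.07):
    """img_feats: (B, D) image embeddings; caption_pool: (P, D) embeddings of a
    large pool of random captions, refreshed as training proceeds."""
    img = F.normalize(img_feats, dim=-1)
    pool = F.normalize(caption_pool, dim=-1)
    # Re-pair every image with its nearest-neighbour caption from the pool,
    # which breaks any fixed (poisoned) image-caption association.
    nn_idx = (img @ pool.T).argmax(dim=-1)            # (B,)
    matched = pool[nn_idx]                            # (B, D)
    logits = img @ matched.T / temperature            # (B, B)
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Usage with toy embeddings (D=512, pool of 1024 random captions):
img_feats = torch.randn(32, 512)
caption_pool = torch.randn(1024, 512)
loss = pool_matched_contrastive_loss(img_feats, caption_pool)
```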
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.