Data Poisoning Attacks Against Multimodal Encoders
- URL: http://arxiv.org/abs/2209.15266v2
- Date: Mon, 5 Jun 2023 13:52:24 GMT
- Title: Data Poisoning Attacks Against Multimodal Encoders
- Authors: Ziqing Yang and Xinlei He and Zheng Li and Michael Backes and Mathias
Humbert and Pascal Berrang and Yang Zhang
- Abstract summary: We study poisoning attacks against multimodal models in both visual and linguistic modalities.
To mitigate the attacks, we propose both pre-training and post-training defenses.
- Score: 24.02062380303139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the newly emerged multimodal models, which leverage both visual and
linguistic modalities to train powerful encoders, have gained increasing
attention. However, learning from a large-scale unlabeled dataset also exposes
the model to the risk of potential poisoning attacks, whereby the adversary
aims to perturb the model's training data to trigger malicious behaviors in it.
In contrast to previous work, only poisoning visual modality, in this work, we
take the first step to studying poisoning attacks against multimodal models in
both visual and linguistic modalities. Specially, we focus on answering two
questions: (1) Is the linguistic modality also vulnerable to poisoning attacks?
and (2) Which modality is most vulnerable? To answer the two questions, we
propose three types of poisoning attacks against multimodal models. Extensive
evaluations on different datasets and model architectures show that all three
attacks can achieve significant attack performance while maintaining model
utility in both visual and linguistic modalities. Furthermore, we observe that
the poisoning effect differs between different modalities. To mitigate the
attacks, we propose both pre-training and post-training defenses. We
empirically show that both defenses can significantly reduce the attack
performance while preserving the model's utility.
Related papers
- Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization [39.37308843208039]
We introduce a more threatening type of poisoning attack called the Deferred Poisoning Attack.
This new attack allows the model to function normally during the training and validation phases but makes it very sensitive to evasion attacks or even natural noise.
We have conducted both theoretical and empirical analyses of the proposed method and validated its effectiveness through experiments on image classification tasks.
arXiv Detail & Related papers (2024-11-06T08:27:49Z) - PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning [32.508939142492004]
We introduce PoisonBench, a benchmark for evaluating large language models' susceptibility to data poisoning during preference learning.
Data poisoning attacks can manipulate large language model responses to include hidden malicious content or biases.
We deploy two distinct attack types across eight realistic scenarios, assessing 21 widely-used models.
arXiv Detail & Related papers (2024-10-11T13:50:50Z) - Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning [14.011140902511135]
In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks.
Despite being widely applied, in-context learning is vulnerable to malicious attacks.
We design a new backdoor attack method, named ICLAttack, to target large language models based on in-context learning.
arXiv Detail & Related papers (2024-01-11T14:38:19Z) - SA-Attack: Improving Adversarial Transferability of Vision-Language
Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Rethinking Model Ensemble in Transfer-based Adversarial Attacks [46.82830479910875]
An effective strategy to improve the transferability is attacking an ensemble of models.
Previous works simply average the outputs of different models.
We propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples.
arXiv Detail & Related papers (2023-03-16T06:37:16Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Learning to Attack: Towards Textual Adversarial Attacking in Real-world
Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z) - Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer
Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable are the adversarial examples produced by a source model to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.