SA-Attack: Improving Adversarial Transferability of Vision-Language
Pre-training Models via Self-Augmentation
- URL: http://arxiv.org/abs/2312.04913v1
- Date: Fri, 8 Dec 2023 09:08:50 GMT
- Authors: Bangyan He, Xiaojun Jia, Siyuan Liang, Tianrui Lou, Yang Liu and
Xiaochun Cao
- Abstract summary: In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Vision-Language Pre-training (VLP) models are vulnerable to
adversarial examples. These adversarial examples present substantial security
risks to VLP models, as they can leverage inherent weaknesses in the models,
resulting in incorrect predictions. In contrast to white-box adversarial
attacks, transfer attacks (where the adversary crafts adversarial examples on a
white-box model to fool another black-box model) are more reflective of
real-world scenarios, thus making them more meaningful for research. By
summarizing and analyzing existing research, we identified two factors that can
influence the efficacy of transfer attacks on VLP models: inter-modal
interaction and data diversity. Based on these insights, we propose a
self-augment-based transfer attack method, termed SA-Attack. Specifically,
during the generation of adversarial images and adversarial texts, we apply
different data augmentation methods to the image modality and text modality,
respectively, with the aim of improving the adversarial transferability of the
generated adversarial images and texts. Experiments conducted on the Flickr30K
and COCO datasets have validated the effectiveness of our method. Our code will
be available after this paper is accepted.
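The abstract specifies the mechanism only at a high level: apply modality-specific augmentation while generating the adversarial examples. Below is a minimal sketch of the image side of that idea, assuming a CLIP-style white-box image encoder, crop/flip augmentations, a cosine-similarity loss, and square inputs; all of these are illustrative stand-ins rather than the authors' exact recipe, and the text-side augmentation (e.g. paraphrasing the caption) is omitted.
```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def sa_image_attack(image_encoder, image, text_emb,
                    eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD on the image, where each step must survive a random augmentation."""
    augment = T.Compose([
        T.RandomResizedCrop(image.shape[-1], scale=(0.8, 1.0)),  # assumes square inputs
        T.RandomHorizontalFlip(),
    ])
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_emb = image_encoder(augment(adv))          # encode a random augmented view
        sim = F.cosine_similarity(img_emb, text_emb).mean()
        grad, = torch.autograd.grad(sim, adv)
        adv = adv.detach() - alpha * grad.sign()       # step to reduce image-text similarity
        adv = image + (adv - image).clamp(-eps, eps)   # project into the L_inf ball
        adv = adv.clamp(0, 1)                          # keep a valid image
    return adv.detach()
```
Because each PGD step sees a different random view, the perturbation cannot latch onto one fixed input pipeline of the surrogate, which is the intuition behind augmentation-based transferability gains.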
Related papers
- Feedback-based Modal Mutual Search for Attacking Vision-Language Pre-training Models [8.943713711458633]
We propose a new attack paradigm called Feedback-based Modal Mutual Search (FMMS).
FMMS pushes matched image-text pairs apart while randomly drawing mismatched pairs closer in feature space.
This is the first work to exploit target model feedback to explore multi-modality adversarial boundaries.
arXiv Detail & Related papers (2024-08-27T02:31:39Z)
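The FMMS summary above describes its objective geometrically: push matched image-text pairs apart and randomly pull mismatched pairs together. A minimal sketch of such a push/pull loss follows; the encoders, batching, and especially how the target model's feedback enters the search are not specified by the summary, so this shows only the matched/mismatched geometry.
```python
import torch
import torch.nn.functional as F

def push_pull_loss(img_emb, txt_emb):
    """img_emb, txt_emb: (B, D) embeddings of matched image-text pairs."""
    matched = F.cosine_similarity(img_emb, txt_emb)        # similarity of true pairs
    shuffled = txt_emb[torch.randperm(txt_emb.shape[0])]   # randomly drawn mismatches
    mismatched = F.cosine_similarity(img_emb, shuffled)    # (a fixed point can occur;
                                                           #  acceptable in a sketch)
    # Minimizing this pushes matched pairs apart and draws mismatched pairs closer.
    return matched.mean() - mismatched.mean()
```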
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
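The MirrorCheck summary gives the core of the defense: re-generate an image from the target VLM's caption and check it against the input. A minimal sketch under that reading, with the captioner, T2I generator, image encoder, and threshold all as assumed stand-ins:
```python
import torch
import torch.nn.functional as F

def mirror_check(image, caption_fn, t2i_fn, image_encoder, threshold=0.5):
    """Return True if `image` looks adversarial (low input/regeneration similarity)."""
    caption = caption_fn(image)        # the target VLM's caption for the input
    regenerated = t2i_fn(caption)      # text-to-image round trip
    sim = F.cosine_similarity(
        image_encoder(image), image_encoder(regenerated)
    ).mean()
    # Adversarial inputs yield captions that no longer match the visual content,
    # so the regenerated image drifts away from the input in feature space.
    return bool(sim < threshold)
```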
- Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory [8.591762884862504]
Vision-language pre-training models are susceptible to multimodal adversarial examples (AEs).
We propose using diversification along the intersection region of adversarial trajectory to expand the diversity of AEs.
To further mitigate potential overfitting, we steer the adversarial text away from the last intersection region along the optimization path.
arXiv Detail & Related papers (2024-03-19T05:10:10Z)
- OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization [65.57380193070574]
Vision-language pre-training models are vulnerable to multi-modal adversarial examples.
Recent works have indicated that leveraging data augmentation and image-text modal interactions can enhance the transferability of adversarial examples.
We propose an Optimal Transport-based Adversarial Attack, dubbed OT-Attack.
arXiv Detail & Related papers (2023-12-07T16:16:50Z)
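The OT-Attack summary names the tool (optimal transport) but not the exact formulation. A common way to use OT in this setting, and plausibly what the title refers to, is to score alignment between a set of (augmented) image features and a set of text features via an entropy-regularized transport plan; the Sinkhorn sketch below is that standard building block, with uniform marginals and cosine costs as assumptions, not the paper's exact method.
```python
import torch

def sinkhorn_plan(cost, eps=0.1, n_iters=100):
    """Entropy-regularized OT plan for an (n, m) cost matrix, uniform marginals."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)      # uniform mass on image features
    nu = torch.full((m,), 1.0 / m)      # uniform mass on text features
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(n_iters):            # Sinkhorn fixed-point updates
        u = mu / (K @ v)
        v = nu / (K.t() @ u)
    return u[:, None] * K * v[None, :]  # transport plan; entries sum to 1

def ot_alignment_cost(img_feats, txt_feats):
    """OT cost between feature sets, e.g. augmented image views vs. text tokens."""
    cost = 1.0 - img_feats @ txt_feats.t()  # cosine cost if inputs are normalized
    plan = sinkhorn_plan(cost)
    return (plan * cost).sum()
```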
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [52.530286579915284]
We present the first study to investigate the adversarial transferability of vision-language pre-training models.
The transferability degradation is partly caused by the under-utilization of cross-modal interactions.
We propose a highly transferable Set-level Guidance Attack (SGA) that thoroughly leverages modality interactions and incorporates alignment-preserving augmentation with cross-modal guidance.
arXiv Detail & Related papers (2023-07-26T09:19:21Z)
- Rethinking Model Ensemble in Transfer-based Adversarial Attacks [46.82830479910875]
An effective strategy for improving transferability is to attack an ensemble of models.
Previous works simply average the outputs of different models.
We propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples.
arXiv Detail & Related papers (2023-03-16T06:37:16Z)
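The summary above spells out the baseline (prior works simply average model outputs) more concretely than CWA itself, so the sketch below shows that baseline: an L_inf PGD loop ascending the average per-model loss. The models, loss, and budgets are assumptions; CWA's common-weakness objective replaces this plain average.
```python
import torch
import torch.nn.functional as F

def ensemble_pgd(models, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L_inf PGD against the averaged cross-entropy of an ensemble."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.stack(
            [F.cross_entropy(m(adv), y) for m in models]
        ).mean()                                  # simple average over the ensemble
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()  # ascend: make every model wrong
        adv = x + (adv - x).clamp(-eps, eps)      # project into the L_inf ball
        adv = adv.clamp(0, 1)                     # keep a valid image
    return adv.detach()
```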
- Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a transfer attack success rate 12.85% higher than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting models are vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)