Mutual-modality Adversarial Attack with Semantic Perturbation
- URL: http://arxiv.org/abs/2312.12768v1
- Date: Wed, 20 Dec 2023 05:06:01 GMT
- Title: Mutual-modality Adversarial Attack with Semantic Perturbation
- Authors: Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
- Abstract summary: We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
- Score: 81.66172089175346
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Adversarial attacks constitute a notable threat to machine learning systems,
given their potential to induce erroneous predictions and classifications.
In real-world settings, however, the essential specifics of the deployed
model are frequently treated as a black box, which reduces the model's
vulnerability to such attacks. Enhancing the transferability of adversarial
samples has therefore become a crucial area of research, one that relies
heavily on selecting appropriate surrogate models. To address this challenge, we
propose a novel approach that generates adversarial attacks in a
mutual-modality optimization scheme. Our approach is accomplished by leveraging
the pre-trained CLIP model. First, we conduct a visual attack on the clean
image that induces semantic perturbations in the embedding space it shares
with the textual modality. Then, we apply the corresponding defense on the
textual side by updating the prompts, which forces a re-matching in the
perturbed embedding space. Finally, to enhance attack transferability, we
train the visual attack and the textual defense iteratively, so that the two
processes optimize against each other. We evaluate our
approach on several benchmark datasets and demonstrate that our
mutual-modality attack strategy effectively produces highly transferable
attacks that remain stable across target networks. Our approach outperforms
state-of-the-art attack methods and can be readily deployed as a plug-and-play
solution.
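To make the alternating visual-attack / textual-defense loop concrete, below is a minimal sketch written against the Hugging Face CLIP checkpoint openai/clip-vit-base-patch32. It is an illustration under simplifying assumptions, not the authors' implementation: the textual defense updates a free continuous prompt vector in the joint embedding space rather than real prompt tokens, the perturbation budget is applied in the processor's normalized pixel space, and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def mutual_modality_attack(image, caption, eps=8 / 255, alpha=1 / 255, rounds=10):
    """Alternate a visual attack and a textual defense in CLIP's joint space."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True).to(device)
    x_clean = inputs["pixel_values"]
    delta = torch.zeros_like(x_clean, requires_grad=True)

    with torch.no_grad():
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
    # Simplification: a free continuous prompt vector stands in for prompt updates.
    prompt = text_emb.clone().requires_grad_(True)
    prompt_opt = torch.optim.Adam([prompt], lr=1e-2)

    for _ in range(rounds):
        # Visual attack: push the image embedding away from the current prompt.
        img_emb = model.get_image_features(pixel_values=x_clean + delta)
        F.cosine_similarity(img_emb, prompt).mean().backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend the similarity
            delta.clamp_(-eps, eps)              # budget in normalized pixel space
        delta.grad = None

        # Textual defense: update the prompt to re-match the perturbed embedding.
        img_emb = model.get_image_features(pixel_values=(x_clean + delta).detach())
        prompt_opt.zero_grad()
        (-F.cosine_similarity(img_emb, prompt).mean()).backward()
        prompt_opt.step()

    return (x_clean + delta).detach()
```

A perturbed tensor returned this way would still need to be de-normalized through the processor's image statistics before being handed to a black-box target network as an image.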
Related papers
- Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack [6.101494710781259]
We introduce a unified framework for conducting adversarial attacks within the context of 3D object tracking.
In addressing black-box attack scenarios, we introduce a novel transfer-based approach, the Target-aware Perturbation Generation (TAPG) algorithm.
Our experimental findings reveal a significant vulnerability in advanced tracking methods when subjected to both black-box and white-box attacks.
arXiv Detail & Related papers (2024-10-28T10:20:38Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
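As a rough illustration of the detection recipe in the MirrorCheck entry above: caption the incoming image with the target VLM, re-render that caption with a text-to-image model, and flag the input when the two images disagree in CLIP feature space. The caption_fn and t2i_fn callables and the 0.6 threshold are placeholders, not the paper's exact components.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed(image):
    pixels = proc(images=image, return_tensors="pt")["pixel_values"]
    return F.normalize(clip.get_image_features(pixel_values=pixels), dim=-1)

@torch.no_grad()
def looks_adversarial(image, caption_fn, t2i_fn, threshold=0.6):
    caption = caption_fn(image)      # target VLM describes the input
    regenerated = t2i_fn(caption)    # T2I model re-renders that description
    similarity = (embed(image) * embed(regenerated)).sum().item()
    return similarity < threshold    # low agreement => flag as adversarial
```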
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns.
We propose a similar targeted attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z)
- Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization [17.756085566366167]
We propose a feature randomization-based approach that resists eight adversarial attacks targeting deep learning models.
Our methodology can secure the target network and resist adversarial attack transferability by over 60%.
arXiv Detail & Related papers (2022-09-11T20:14:12Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a transfer-attack success rate 12.85% higher than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
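For context, the sketch below shows the plain transfer-attack baseline that the LLTA entry above builds on: perturbations are crafted on surrogate models, with gradients averaged over random input shifts (a simple stand-in for data augmentation) and over an ensemble of surrogates (a stand-in for model augmentation). LLTA's actual meta-learning procedure is not reproduced here, and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def transfer_attack(surrogates, x, y, eps=8 / 255, alpha=2 / 255, steps=10, n_aug=4):
    """Craft an L_inf perturbation on surrogate models for transfer to a victim."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grad = torch.zeros_like(x)
        for model in surrogates:                 # "model augmentation" stand-in
            for _ in range(n_aug):               # "data augmentation" stand-in
                dy, dx = torch.randint(-4, 5, (2,)).tolist()
                x_aug = torch.roll(x + delta, shifts=(dy, dx), dims=(2, 3))
                loss = F.cross_entropy(model(x_aug), y)
                grad += torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()              # ascend the surrogate loss
            delta.clamp_(-eps, eps)                   # stay within the budget
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels valid
    return (x + delta).detach()
```

The returned examples are then evaluated directly on the unseen victim model.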
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
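A toy rendering of the "learned optimizer" idea in the MAMA entry above: a small recurrent network maps the current gradient to an update direction, replacing PGD's hand-crafted sign step. The architecture, sizes, and step size here are invented for illustration, and the training loop that would fit this network is omitted.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Maps a gradient to an update direction, coordinate-wise, with memory."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        g = grad.reshape(-1, 1)                   # one GRU step per coordinate
        state = self.cell(g, state)
        update = torch.tanh(self.head(state))     # bounded update direction
        return update.reshape(grad.shape), state

def learned_attack(model, x, y, opt_net, steps=10, eps=8 / 255, alpha=2 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    state = torch.zeros(x.numel(), opt_net.cell.hidden_size)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        update, state = opt_net(grad, state)
        state = state.detach()                    # applying, not training, here
        with torch.no_grad():
            delta += alpha * update               # learned step replaces sign(grad)
            delta.clamp_(-eps, eps)
    return (x + delta).detach()
```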
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
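The entry above refines classic substitute training; a bare-bones version of that baseline looks like the following, where target_query_fn is a hypothetical black-box query interface. The paper's actual contribution, designing the distribution of the query data, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def train_substitute(substitute, target_query_fn, loader, epochs=5, lr=1e-3):
    """Fit a local substitute on labels queried from the black-box target."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                y = target_query_fn(x).argmax(dim=1)   # black-box labels only
            opt.zero_grad()
            F.cross_entropy(substitute(x), y).backward()
            opt.step()
    return substitute
```

White-box adversarial examples crafted on the returned substitute (e.g., with PGD) are then transferred to the black-box target.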
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.