A Perceptual Distortion Reduction Framework for Adversarial Perturbation
Generation
- URL: http://arxiv.org/abs/2105.00278v1
- Date: Sat, 1 May 2021 15:08:10 GMT
- Title: A Perceptual Distortion Reduction Framework for Adversarial Perturbation
Generation
- Authors: Ruijie Yang, Yunhong Wang and Yuanfang Guo
- Abstract summary: We propose a perceptual distortion reduction framework to tackle this problem from two perspectives.
We propose a perceptual distortion constraint and add it into the objective function of adversarial attack to jointly optimize the perceptual distortions and attack success rate.
- Score: 58.6157191438473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most adversarial attack methods suffer from large perceptual distortions, such as visible artifacts, when the attack strength is relatively high. A portion of these distortions contributes little to the attack success rate; this portion, induced by unnecessary modifications and the lack of a proper perceptual distortion constraint, is the target of the proposed framework. In this paper, we propose
a perceptual distortion reduction framework to tackle this problem from two
perspectives. We guide the perturbation addition process to reduce unnecessary
modifications by proposing an activated region transfer attention mask, which
intends to transfer the activated regions of the target model from the correct
prediction to incorrect ones. Note that an ensemble model is adopted to predict
the activated regions of the unseen models in the black-box setting of our
framework. In addition, we propose a perceptual distortion constraint and add it
into the objective function of adversarial attack to jointly optimize the
perceptual distortions and attack success rate. Extensive experiments have
verified the effectiveness of our framework on several baseline methods.
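The joint objective can be pictured as a PGD-style loop in which a differentiable perceptual distance is traded off against the attack loss, and a spatial attention mask restricts where the perturbation is added. The sketch below is only a minimal illustration under assumed names (`model`, `perceptual_dist`, `attn_mask`, and the trade-off weight `lam` are placeholders), not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def masked_perceptual_pgd(model, x, y, perceptual_dist, attn_mask,
                          eps=8 / 255, alpha=2 / 255, steps=40, lam=1.0):
    """Minimal sketch (not the paper's code): an iterative attack whose
    objective jointly maximizes the classification loss and minimizes a
    differentiable perceptual distance, while a fixed attention mask
    (e.g. derived from the model's activated regions) limits where
    pixels are modified, reducing unnecessary changes."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        attack_loss = F.cross_entropy(logits, y)        # push prediction off the true label
        percep_loss = perceptual_dist(x_adv, x).mean()  # penalize visible distortion
        loss = attack_loss - lam * percep_loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # restrict the update to the masked (activated) regions
            x_adv = x_adv + alpha * grad.sign() * attn_mask
            # project back into the eps-ball and the valid pixel range
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

In the black-box setting described above, `attn_mask` would be predicted by an ensemble of surrogate models rather than by the unseen target model itself.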
Related papers
- MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks [21.227398434694724]
We introduce an innovative framework that incorporates a precision-optimized noise predictor to enhance the effectiveness of our attack framework.
Our framework provides a cutting-edge solution for multi-modal adversarial attacks, ensuring reduced latency.
We demonstrate that our framework achieves outstanding transferability and robustness against purification defenses.
arXiv Detail & Related papers (2024-10-17T23:52:39Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Transferable Adversarial Attack on Image Tampering Localization [7.177637468324888]
We propose an adversarial attack scheme to reveal the reliability of such tampering localizers.
A black-box attack is achieved by relying on the transferability of such adversarial examples to different localizers.
arXiv Detail & Related papers (2023-09-19T01:48:01Z)
- Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
arXiv Detail & Related papers (2021-06-10T01:45:32Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Query-Free Adversarial Transfer via Undertrained Surrogates [14.112444998191698]
We introduce a new method for improving the efficacy of adversarial attacks in a black-box setting by undertraining the surrogate model on which the attacks are generated.
We show that this method transfers well across architectures and outperforms state-of-the-art methods by a wide margin.
arXiv Detail & Related papers (2020-07-01T23:12:22Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)
- SAD: Saliency-based Defenses Against Adversarial Examples [0.9786690381850356]
Adversarial examples drift model predictions away from the original intent of the network.
In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack.
arXiv Detail & Related papers (2020-03-10T15:55:23Z)