Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations
with Perceptual Similarity
- URL: http://arxiv.org/abs/2107.01396v1
- Date: Sat, 3 Jul 2021 10:14:01 GMT
- Title: Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations
with Perceptual Similarity
- Authors: Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan and
Quanxin Zhang
- Abstract summary: Adversarial examples are malicious images with visually imperceptible perturbations.
We propose Demiguise Attack, crafting ``unrestricted'' perturbations with Perceptual Similarity.
We extend widely-used attacks with our approach, enhancing adversarial effectiveness impressively while contributing to imperceptibility.
- Score: 5.03315505352304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have been found to be vulnerable to adversarial
examples. Adversarial examples are malicious images with visually imperceptible
perturbations. While these carefully crafted perturbations restricted with
tight $L_p$ norm bounds are small, they are still easily perceivable by humans.
These perturbations also have limited success rates when attacking black-box
models or models with defenses like noise reduction filters. To solve these
problems, we propose Demiguise Attack, crafting ``unrestricted'' perturbations
with Perceptual Similarity. Specifically, we can create powerful and
photorealistic adversarial examples by manipulating semantic information based
on Perceptual Similarity. Adversarial examples we generate are friendly to the
human visual system (HVS), although the perturbations are of large magnitudes.
We extend widely-used attacks with our approach, enhancing adversarial
effectiveness impressively while contributing to imperceptibility. Extensive
experiments show that the proposed method not only outperforms various
state-of-the-art attacks in terms of fooling rate, transferability, and
robustness against defenses, but can also effectively improve existing attacks. In
addition, we also notice that our implementation can simulate illumination and
contrast changes that occur in real-world scenarios, which will contribute to
exposing the blind spots of DNNs.
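The abstract gives no code, so the following is a minimal, hypothetical Python sketch (not the authors' released implementation) of the idea it describes: optimizing a large-magnitude additive perturbation while keeping the LPIPS perceptual distance to the clean image within a small budget, instead of bounding an $L_p$ norm. The model, image batch, labels, budget, and penalty weight are placeholder assumptions.

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

def perceptual_attack(model, x, y, steps=100, lr=0.01, lpips_budget=0.05, penalty=10.0):
    """x: clean RGB images in [0, 1] with shape (N, C, H, W); y: true labels."""
    percep = lpips.LPIPS(net='alex')               # learned perceptual similarity metric
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        cls_loss = -F.cross_entropy(model(x_adv), y)   # push the model toward misclassification
        # LPIPS expects inputs in [-1, 1]; penalize only the part exceeding the perceptual budget
        d = percep(x_adv * 2 - 1, x * 2 - 1).mean()
        loss = cls_loss + penalty * torch.clamp(d - lpips_budget, min=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)
```

Because only the perceptual distance is constrained, the perturbation may take large pixel-wise magnitudes (e.g. smooth, illumination-like changes) as long as it remains perceptually close to the original image, which matches the abstract's observation about simulating illumination and contrast changes.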
Related papers
- Transcending Adversarial Perturbations: Manifold-Aided Adversarial
Examples with Legitimate Semantics [10.058463432437659]
Deep neural networks are significantly vulnerable to adversarial examples manipulated by malicious tiny perturbations.
In this paper, we propose a supervised semantic-transformation generative model to generate adversarial examples with real and legitimate semantics.
Experiments on MNIST and industrial defect datasets showed that our adversarial examples not only exhibited better visual quality but also achieved superior attack transferability.
arXiv Detail & Related papers (2024-02-05T15:25:40Z) - AFLOW: Developing Adversarial Examples under Extremely Noise-limited
Settings [7.828994881163805]
Deep neural networks (DNNs) are vulnerable to adversarial attacks.
We propose a novel Normalizing Flow-based end-to-end attack framework, called AFLOW, to synthesize imperceptible adversarial examples.
Compared with existing methods, AFLOW exhibits superiority in imperceptibility, image quality, and attack capability.
arXiv Detail & Related papers (2023-10-15T10:54:07Z) - Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction (a latent-space sketch of this step appears after this list).
arXiv Detail & Related papers (2023-05-18T02:57:43Z) - Diffusion Models for Imperceptible and Transferable Adversarial Attack [23.991194050494396]
We propose a novel imperceptible and transferable attack by leveraging both the generative and discriminative power of diffusion models.
Our proposed method, DiffAttack, is the first that introduces diffusion models into the adversarial attack field.
arXiv Detail & Related papers (2023-05-14T16:02:36Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - ALA: Naturalness-aware Adversarial Lightness Attack [20.835253688686763]
Adversarial Lightness Attack (ALA) is a white-box unrestricted adversarial attack that focuses on modifying the lightness of the images.
To enhance the naturalness of images, we craft the naturalness-aware regularization according to the range and distribution of light.
arXiv Detail & Related papers (2022-01-16T15:25:24Z) - Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can overcome this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples (a minimal halftoning sketch appears after this list).
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - Perception Improvement for Free: Exploring Imperceptible Black-box
Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially for large-size images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
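For the Content-based Unrestricted Adversarial Attack entry above, here is a hypothetical latent-space sketch of the step it describes: mapping an image onto a low-dimensional manifold and optimizing along an adversarial direction. The encoder, decoder, and classifier here are generic placeholders (e.g. a pretrained autoencoder), not the paper's actual components.

```python
import torch
import torch.nn.functional as F

def manifold_attack(encoder, decoder, classifier, x, y, steps=50, lr=0.05):
    """x: clean images in [0, 1]; y: true labels. Returns decoded adversarial images."""
    z = encoder(x).detach().requires_grad_(True)   # latent code on the natural-image manifold
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_adv = decoder(z).clamp(0, 1)
        # Maximize the classification loss of the decoded, still natural-looking image
        loss = -F.cross_entropy(classifier(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach().clamp(0, 1)
```

Because the search stays on the decoder's output manifold, the resulting images change semantic content (color, texture, style) rather than adding high-frequency noise.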
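For the error-diffusion halftoning defense entry, a minimal sketch (an assumption, not the paper's code) of classic Floyd-Steinberg error diffusion used as an input transformation: each pixel is binarized and the quantization error is diffused to unvisited neighbours, which tends to destroy the fine-grained structure adversarial perturbations rely on.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """gray: 2-D float array in [0, 1]; returns a binary (0/1) halftoned image."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Diffuse the quantization error to unvisited neighbours (Floyd-Steinberg weights)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```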