ALA: Naturalness-aware Adversarial Lightness Attack
- URL: http://arxiv.org/abs/2201.06070v3
- Date: Tue, 28 May 2024 05:29:42 GMT
- Title: ALA: Naturalness-aware Adversarial Lightness Attack
- Authors: Yihao Huang, Liangru Sun, Qing Guo, Felix Juefei-Xu, Jiayi Zhu, Jincao Feng, Yang Liu, Geguang Pu
- Abstract summary: Adversarial Lightness Attack (ALA) is a white-box unrestricted adversarial attack that focuses on modifying the lightness of the images.
To enhance the naturalness of images, we craft the naturalness-aware regularization according to the range and distribution of light.
- Score: 20.835253688686763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most researchers have tried to enhance the robustness of DNNs by revealing and repairing their vulnerabilities with specialized adversarial examples. Some of these attack examples carry imperceptible perturbations restricted by an Lp norm. However, due to their high-frequency nature, such adversarial examples can be defended against by denoising methods and are hard to realize in the physical world. To avoid these defects, some works have proposed unrestricted attacks, which gain better robustness and practicality. Unfortunately, the resulting examples usually look unnatural and can alert human observers. In this paper, we propose Adversarial Lightness Attack (ALA), a white-box unrestricted adversarial attack that focuses on modifying the lightness of images. The shape and color of the samples, which are crucial to human perception, are barely influenced. To obtain adversarial examples with a high attack success rate, we propose unconstrained enhancement of the light-and-shade relationship in images. To enhance the naturalness of the images, we craft a naturalness-aware regularization based on the range and distribution of light. The effectiveness of ALA is verified on two popular datasets for different tasks (i.e., ImageNet for image classification and Places-365 for scene recognition).
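As a concrete illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch of a lightness-only attack. It is not the authors' implementation: the piecewise-linear curve over the LAB L channel, the monotonicity constraint, and the identity-pull term standing in for the naturalness-aware regularizer are illustrative assumptions, and kornia is assumed for RGB/LAB conversion.

```python
# Minimal sketch of a lightness-only attack in the spirit of ALA
# (assumptions: piecewise-linear L-channel filter, identity-pull regularizer).
import torch
import torch.nn.functional as F
import kornia.color as kc

def lightness_attack(model, image, label, segments=8, steps=50, lr=0.05, w_nat=1.0):
    """image: (1, 3, H, W) RGB in [0, 1]; label: (1,) true class."""
    lab = kc.rgb_to_lab(image)
    L, ab = lab[:, :1] / 100.0, lab[:, 1:]              # L normalized to [0, 1]
    slopes = torch.ones(segments, requires_grad=True)   # identity curve at init
    opt = torch.optim.Adam([slopes], lr=lr)
    idx = (L * segments).long().clamp(max=segments - 1) # segment index per pixel
    frac = L * segments - idx.float()                   # position within segment
    for _ in range(steps):
        pos = torch.relu(slopes) + 1e-6                 # keep the curve monotone
        knots = torch.cat([torch.zeros(1), torch.cumsum(pos, 0)]) / pos.sum()
        new_L = knots[idx] + frac * (knots[idx + 1] - knots[idx])
        adv = kc.lab_to_rgb(torch.cat([new_L * 100.0, ab], dim=1)).clamp(0, 1)
        fool = F.cross_entropy(model(adv), label)       # maximize: untargeted attack
        natural = ((pos - 1.0) ** 2).mean()             # assumed naturalness term
        opt.zero_grad()
        (-fool + w_nat * natural).backward()
        opt.step()
    return adv.detach()
```

Optimizing a handful of curve parameters rather than per-pixel noise is what keeps the perturbation low-frequency, which is exactly the property the abstract contrasts with denoisable Lp attacks.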
Related papers
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
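The optimize-on-the-manifold idea can be sketched in a few lines, assuming a pretrained encoder/decoder pair stands in for the paper's generative model:

```python
# Generic sketch of a manifold-constrained attack (encoder/decoder assumed).
import torch
import torch.nn.functional as F

def manifold_attack(classifier, encoder, decoder, image, label, steps=30, lr=0.01):
    """Optimize in latent space so the result stays on the natural-image manifold."""
    z = encoder(image).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        adv = decoder(z).clamp(0, 1)                     # decoded point on the manifold
        loss = -F.cross_entropy(classifier(adv), label)  # ascend the adversarial direction
        opt.zero_grad(); loss.backward(); opt.step()
    return decoder(z).detach().clamp(0, 1)
```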
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Adversarial Neon Beam: A Light-based Physical Attack to DNNs [17.555617901536404]
In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB)
Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness.
By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, enabling physical samples to appear more natural.
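A toy sketch of the idea, under the assumption that the beam can be modeled as an additive light streak with Gaussian falloff and that the attack is query-based random search (the paper's parameterization and optimizer may differ):

```python
# Toy neon-beam attack sketch: composite a soft colored streak, random-search
# its endpoints and color to lower the true-class confidence.
import torch

def render_beam(h, w, p0, p1, color, width=8.0):
    """Additive neon-like streak: Gaussian falloff around segment p0-p1 (x, y)."""
    ys, xs = torch.meshgrid(torch.arange(h).float(), torch.arange(w).float(),
                            indexing="ij")
    d = p1 - p0
    t = (((xs - p0[0]) * d[0] + (ys - p0[1]) * d[1]) / (d @ d + 1e-8)).clamp(0, 1)
    px, py = p0[0] + t * d[0], p0[1] + t * d[1]          # closest point on segment
    glow = torch.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * width ** 2))
    return color.view(3, 1, 1) * glow                    # (3, H, W)

def neon_attack(model, image, label, queries=200):
    """image: (1, 3, H, W) in [0, 1]; label: int true class."""
    _, _, h, w = image.shape
    scale = torch.tensor([float(w), float(h)])
    best, best_conf = image, 1.0
    with torch.no_grad():
        for _ in range(queries):
            p0, p1 = torch.rand(2) * scale, torch.rand(2) * scale
            adv = (image + render_beam(h, w, p0, p1, torch.rand(3))).clamp(0, 1)
            conf = model(adv).softmax(-1)[0, label].item()
            if conf < best_conf:                         # keep best candidate so far
                best, best_conf = adv, conf
    return best
```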
arXiv Detail & Related papers (2022-04-02T12:57:00Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
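The same random-search loop as in the neon-beam sketch above applies here; only the rendering changes from an additive glow to a multiplicative darkening, sketched below for an illustrative triangular shadow (the paper's shadow shape and optimizer may differ):

```python
# Shadow rendering sketch: darken pixels inside a triangle.
import torch

def triangle_mask(h, w, verts):
    """Boolean mask of pixels inside triangle `verts` (3 x 2, in x, y order)."""
    ys, xs = torch.meshgrid(torch.arange(h).float(), torch.arange(w).float(),
                            indexing="ij")
    def side(a, b):  # sign of the cross product (b - a) x (p - a)
        return (b[0] - a[0]) * (ys - a[1]) - (b[1] - a[1]) * (xs - a[0])
    s0, s1, s2 = side(verts[0], verts[1]), side(verts[1], verts[2]), side(verts[2], verts[0])
    return ((s0 >= 0) & (s1 >= 0) & (s2 >= 0)) | ((s0 <= 0) & (s1 <= 0) & (s2 <= 0))

def apply_shadow(image, verts, darkness=0.6):
    """image: (1, 3, H, W) in [0, 1]; multiplicatively darken inside the triangle."""
    _, _, h, w = image.shape
    mask = triangle_mask(h, w, verts).float()            # (H, W), broadcasts over channels
    return image * (1 - (1 - darkness) * mask)
```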
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
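A minimal sketch of a discriminator-free perturbation generator in this spirit; the architecture, the L1 closeness term, and the omission of the saliency weighting are all simplifying assumptions:

```python
# Sketch: an auto-encoder maps images to bounded perturbations, trained to
# fool a fixed classifier with no discriminator in the loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationAE(nn.Module):
    """Auto-encoder that maps an image to a bounded additive perturbation."""
    def __init__(self, eps=0.05):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),   # output in [-1, 1]
        )
    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0, 1)

def train_step(gen, classifier, opt, images, labels, w_rec=10.0):
    adv = gen(images)
    fool = -F.cross_entropy(classifier(adv), labels)     # push toward misclassification
    rec = (adv - images).abs().mean()                    # stay close to the input
    loss = fool + w_rec * rec
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```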
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity [5.03315505352304]
Adversarial examples are malicious images with visually imperceptible perturbations.
We propose Demiguise Attack, crafting "unrestricted" perturbations with Perceptual Similarity.
We extend widely-used attacks with our approach, markedly enhancing adversarial effectiveness while preserving imperceptibility.
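A hedged sketch of the idea: a gradient attack whose objective trades off fooling the model against a perceptual-similarity term, with kornia's SSIM loss standing in for the paper's perceptual measure:

```python
# Sketch: optimize the image directly, penalizing perceptual dissimilarity
# (SSIM) instead of an Lp ball.
import torch
import torch.nn.functional as F
from kornia.losses import ssim_loss

def perceptual_attack(model, image, label, steps=40, lr=0.01, w_sim=5.0):
    adv = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        fool = -F.cross_entropy(model(adv.clamp(0, 1)), label)
        sim = ssim_loss(adv.clamp(0, 1), image, window_size=11)  # 0 when identical
        loss = fool + w_sim * sim        # fool the model, stay perceptually close
        opt.zero_grad(); loss.backward(); opt.step()
    return adv.detach().clamp(0, 1)
```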
arXiv Detail & Related papers (2021-07-03T10:14:01Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
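A minimal sketch of such a two-stream detector; the stream architectures and the use of the top-class confidence gradient as the second input are illustrative assumptions:

```python
# Sketch: one CNN stream sees pixels, the other sees the input gradient of
# the classifier's top-class confidence; features are fused for detection.
import torch
import torch.nn as nn

def confidence_gradient(model, x):
    """Input gradient of the top-class confidence: the 'beyond image space' cue."""
    x = x.clone().requires_grad_(True)
    top = model(x).softmax(-1).max(dim=-1).values.sum()
    (g,) = torch.autograd.grad(top, x)
    return g

class TwoStreamDetector(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():  # small CNN, same shape for both streams
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.image_stream, self.grad_stream = stream(), stream()
        self.head = nn.Linear(64, 2)     # clean vs adversarial

    def forward(self, x, model):
        f = torch.cat([self.image_stream(x),
                       self.grad_stream(confidence_gradient(model, x))], dim=-1)
        return self.head(f)
```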
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
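The defense builds on classic error diffusion. Below is a self-contained Floyd-Steinberg implementation, applied per channel as an input transformation; that per-channel usage is a plausible reading, not necessarily the paper's exact pipeline:

```python
# Floyd-Steinberg error diffusion: binarize each pixel and push the
# quantization error onto unvisited neighbors.
import numpy as np

def error_diffusion(channel):
    """Halftone one channel (H x W floats in [0, 1])."""
    img = channel.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:                img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:      img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:  img[y + 1, x + 1] += err * 1 / 16
    return img

def halftone_defense(image):
    """image: (H, W, 3) in [0, 1] -> halftoned image fed to the classifier."""
    return np.stack([error_diffusion(image[..., c]) for c in range(3)], axis=-1)
```

The intuition is that the hard binarization destroys the small, carefully placed perturbations while the diffused error preserves the image's overall appearance.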
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
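A compressed sketch of training against both threat models at once; the generator G supplying on-manifold images and the shared PGD helper are assumptions, and the paper's on-manifold attack may differ:

```python
# Sketch: each step robustifies the model against an image-space (L-inf)
# adversary and a latent-space (on-manifold) adversary.
import torch
import torch.nn.functional as F

def pgd(loss_fn, x, eps, steps=5):
    """Untargeted L-inf PGD around x for the given per-input loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss_fn(x + delta).backward()
        delta.data = (delta + eps / 2 * delta.grad.sign()).clamp(-eps, eps)
        delta.grad = None
    return (x + delta).detach()

def dmat_step(model, G, opt, z, labels):
    x = G(z).detach().clamp(0, 1)                        # on-manifold clean batch
    x_img = pgd(lambda d: F.cross_entropy(model(d.clamp(0, 1)), labels),
                x, eps=8 / 255).clamp(0, 1)              # image-space adversary
    z_adv = pgd(lambda zz: F.cross_entropy(model(G(zz).clamp(0, 1)), labels),
                z, eps=0.1)                              # latent-space adversary
    x_lat = G(z_adv).detach().clamp(0, 1)
    opt.zero_grad()
    loss = (F.cross_entropy(model(x_img), labels) +
            F.cross_entropy(model(x_lat), labels))
    loss.backward(); opt.step()
    return loss.item()
```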
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- AdvJND: Generating Adversarial Examples with Just Noticeable Difference [3.638233924421642]
Adding small perturbations to examples causes a well-performing model to misclassify the crafted examples.
Adversarial examples generated by our AdvJND algorithm yield distributions similar to those of the original inputs.
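A hedged sketch of JND-guided perturbation: an FGSM step whose per-pixel budget follows a simple local-contrast (texture-masking) heuristic, a stand-in for the paper's more detailed JND model:

```python
# Sketch: scale an FGSM perturbation by a per-pixel "just noticeable
# difference" budget so the noise hides where local contrast masks it.
import torch
import torch.nn.functional as F

def jnd_map(image, base=2 / 255, k=0.5, win=5):
    """Higher budget where local contrast hides changes. image: (1, 3, H, W)."""
    gray = image.mean(dim=1, keepdim=True)
    mean = F.avg_pool2d(gray, win, stride=1, padding=win // 2)
    var = F.avg_pool2d(gray ** 2, win, stride=1, padding=win // 2) - mean ** 2
    return base + k * var.clamp(min=0).sqrt()            # (1, 1, H, W)

def advjnd_attack(model, image, label):
    image = image.clone().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    step = image.grad.sign() * jnd_map(image.detach())   # per-pixel FGSM budget
    return (image.detach() + step).clamp(0, 1)
```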
arXiv Detail & Related papers (2020-02-01T09:55:27Z)