Adversarial Attack with Raindrops
- URL: http://arxiv.org/abs/2302.14267v2
- Date: Sun, 16 Jul 2023 06:05:14 GMT
- Title: Adversarial Attack with Raindrops
- Authors: Jiyuan Liu, Bingyi Lu, Mingkang Xiong, Tao Zhang, Huilin Xiong
- Abstract summary: Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which are usually crafted artificially and rarely exist in real-world scenarios.
In this paper, we study the adversarial examples caused by raindrops, to demonstrate that there exist plenty of natural phenomena being able to work as adversarial attackers to DNNs.
We present a new approach to generate adversarial raindrops, denoted as AdvRD, using the generative adversarial network (GAN) technique to simulate natural raindrops.
- Score: 7.361748886445515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples, which are usually designed artificially to fool DNNs, but rarely
exist in real-world scenarios. In this paper, we study the adversarial examples
caused by raindrops, to demonstrate that many natural phenomena can act as
adversarial attackers against DNNs. Moreover, we present a new approach to
generate adversarial raindrops, denoted as AdvRD, which uses the generative
adversarial network (GAN) technique to simulate natural raindrops. The images
crafted by our AdvRD look very similar to real-world raindrop images, are
statistically close to the distribution of true raindrop images, and, more
importantly, can launch strong adversarial attacks on state-of-the-art DNN
models. On the other hand, we show that adversarial training with our AdvRD
images can significantly improve the robustness of DNNs to real-world raindrop
attacks. Extensive experiments demonstrate that the images crafted by AdvRD are
visually and statistically close to natural raindrop images, act as strong
attackers against DNN models, and also help improve the robustness of DNNs to
raindrop attacks.
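The abstract describes two components: a GAN that synthesizes raindrop layers, and adversarial training on the resulting composites. The paper's actual generator architecture, training procedure, and attack search are not reproduced here; the PyTorch sketch below only illustrates the general idea under assumed, hypothetical names (`RaindropGenerator`, `composite`, and the pretrained `classifier` are placeholders): sample candidate raindrop layers, keep a composite that flips the classifier's prediction, and optionally reuse such composites for adversarial fine-tuning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical raindrop generator: maps a latent code to an RGBA raindrop layer.
# The real AdvRD generator (and its GAN training against natural raindrop images)
# is described in the paper; this stub only fixes the interface used below.
class RaindropGenerator(nn.Module):
    def __init__(self, latent_dim=128, image_size=224):
        super().__init__()
        self.latent_dim = latent_dim
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 4 * image_size * image_size),
            nn.Sigmoid(),
        )

    def forward(self, z):
        out = self.net(z).view(-1, 4, self.image_size, self.image_size)
        return out  # channels: RGB raindrop appearance + alpha (coverage) mask


def composite(image, raindrop_layer):
    """Alpha-blend a raindrop layer onto a clean image (both in [0, 1])."""
    rgb, alpha = raindrop_layer[:, :3], raindrop_layer[:, 3:4]
    return (1.0 - alpha) * image + alpha * rgb


@torch.no_grad()
def raindrop_attack(generator, classifier, image, label, n_trials=200):
    """Sample raindrop layers for a single image (batch of one) and return the
    first composite that changes the classifier's prediction, if any."""
    for _ in range(n_trials):
        z = torch.randn(image.size(0), generator.latent_dim, device=image.device)
        adv = composite(image, generator(z)).clamp(0.0, 1.0)
        if classifier(adv).argmax(dim=1).item() != label:
            return adv  # successful raindrop adversarial example
    return None


def adversarial_finetune_step(classifier, optimizer, clean, adv, labels):
    """One step of adversarial training on a mix of clean and raindrop composites."""
    optimizer.zero_grad()
    loss = F.cross_entropy(classifier(clean), labels) + \
           F.cross_entropy(classifier(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The rejection-sampling loop above is only a stand-in for whatever latent-space search or generator training the paper actually uses; it merely shows how sampled raindrop layers, image compositing, and adversarial fine-tuning fit together.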
Related papers
- Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers [1.3820778058499328]
Deep Neural Networks (DNNs) have advanced in many real-world applications, such as healthcare and autonomous driving.
Their high computational complexity and vulnerability to adversarial attacks are ongoing challenges.
By uniformly replacing the accurate multipliers in DNN layers with state-of-the-art approximate ones, we explore the robustness of DNNs against various adversarial attacks in a feasible time.
arXiv Detail & Related papers (2024-04-17T18:03:12Z)
- F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns [74.03108122774098]
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations.
This could lead to disastrous results on critical applications such as self-driving cars, surveillance security, and medical diagnosis.
We propose Feature-Focusing Adversarial Training (F$^2$AT), which forces the model to focus on the core features from natural patterns.
arXiv Detail & Related papers (2023-10-23T04:31:42Z)
- Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z)
- Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data [15.084703823643311]
Spiking neural networks (SNNs) offer enhanced energy efficiency and biologically plausible data processing capabilities.
This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers.
We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy.
arXiv Detail & Related papers (2023-02-13T11:34:17Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training has proved to be the most effective strategy, injecting adversarial examples into model training.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples (a minimal halftoning sketch appears after this list).
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
- Adversarial Rain Attack and Defensive Deraining for DNN Perception [29.49757380041375]
We propose to combine two totally different studies, i.e., rainy image synthesis and adversarial attack.
We first present an adversarial rain attack, with which we could simulate various rain situations with the guidance of deployed DNNs.
In particular, we design a factor-aware rain generation that synthesizes rain streaks according to the camera exposure process.
We also present a defensive deraining strategy, for which we design an adversarial rain augmentation that uses mixed adversarial rain layers.
arXiv Detail & Related papers (2020-09-19T10:12:08Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks (DNNs) can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
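Regarding the "Error Diffusion Halftoning Against Adversarial Examples" entry above: that paper's exact transformation is not reproduced here. The sketch below is standard Floyd-Steinberg error diffusion applied per channel as an input-preprocessing step, which is one plausible instantiation of such a defense; `halftone_defense` is an assumed helper name.

```python
import numpy as np

def floyd_steinberg(channel):
    """Binarize one channel (values in [0, 1]) with Floyd-Steinberg error diffusion."""
    img = channel.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error onto not-yet-processed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

def halftone_defense(image):
    """Apply error diffusion to each channel of an HxWxC image before classification."""
    return np.stack([floyd_steinberg(image[..., c]) for c in range(image.shape[-1])],
                    axis=-1)
```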
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.