fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating
Weather Conditions on the Camera Lens of Autonomous Systems
- URL: http://arxiv.org/abs/2205.13807v1
- Date: Fri, 27 May 2022 07:49:31 GMT
- Title: fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating
Weather Conditions on the Camera Lens of Autonomous Systems
- Authors: Alberto Marchisio and Giovanni Caramia and Maurizio Martina and
Muhammad Shafique
- Abstract summary: We emulate the effects of natural weather conditions to introduce plausible perturbations that mislead Deep Neural Networks (DNNs).
By observing the effects of such atmospheric perturbations on the camera lenses, we model the patterns to create different masks that fake the effects of rain, snow, and hail.
We test our proposed fakeWeather attacks on multiple Convolutional Neural Network and Capsule Network models, and report noticeable accuracy drops in the presence of such adversarial perturbations.
- Score: 12.118084418840152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Deep Neural Networks (DNNs) have achieved remarkable performance
in many applications, while several studies have exposed their vulnerability
to malicious attacks. In this paper, we emulate the effects of natural weather
conditions to introduce plausible perturbations that mislead the DNNs. By
observing the effects of such atmospheric perturbations on the camera lenses,
we model the patterns to create different masks that fake the effects of rain,
snow, and hail. Even though the perturbations introduced by our attacks are
visible, their presence remains unnoticed due to their association with natural
events, which can be especially catastrophic for fully-autonomous and unmanned
vehicles. We test our proposed fakeWeather attacks on multiple Convolutional
Neural Network and Capsule Network models, and report noticeable accuracy drops
in the presence of such adversarial perturbations. Our work introduces a new
security threat for DNNs, which is especially severe for safety-critical
applications and autonomous systems.
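As a rough, hypothetical illustration of the attack idea described in the abstract (this is not the authors' implementation), the following Python sketch overlays semi-transparent rain-like streaks on a camera image and checks whether a pretrained classifier changes its prediction. The streak density and length, the input file name, and the use of an ImageNet ResNet-18 as a stand-in for the CNN and Capsule Network models evaluated in the paper are all assumptions.

```python
# Hypothetical sketch (not the authors' code): overlay rain-like streaks on an
# image and see whether a pretrained classifier changes its prediction.
import numpy as np
import torch
from PIL import Image, ImageDraw
from torchvision import models, transforms

def add_fake_rain(img, num_streaks=300, length=12, seed=0):
    """Alpha-composite semi-transparent diagonal streaks that imitate rain on the lens."""
    rng = np.random.default_rng(seed)
    base = img.convert("RGBA")
    streaks = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(streaks)
    w, h = base.size
    for _ in range(num_streaks):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        # Light gray, slightly slanted, semi-transparent streak.
        draw.line([(x, y), (x + 2, y + length)], fill=(200, 200, 200, 140), width=1)
    return Image.alpha_composite(base, streaks).convert("RGB")

# ResNet-18 on ImageNet is only a stand-in for the CNN/CapsNet models in the paper.
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

clean = Image.open("street_scene.jpg").convert("RGB")  # hypothetical input image
rainy = add_fake_rain(clean)

with torch.no_grad():
    for name, im in [("clean", clean), ("fake rain", rainy)]:
        pred = model(preprocess(im).unsqueeze(0)).argmax(dim=1).item()
        print(f"{name}: predicted class {pred}")
```

Snow or hail could be faked analogously by drawing small white dots or blobs instead of streaks, which is the spirit of the different masks the abstract mentions.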
Related papers
- Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather [5.383130566626935]
This paper employs a Denoising Deep Neural Network as a preprocessing step to transform adverse weather images into clear weather images.
It improves driver visualization, which is critical for safe navigation in adverse weather conditions.
arXiv Detail & Related papers (2024-07-02T18:03:52Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging the node injection attack, making the injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond [85.06231315901505]
Rain removal aims to remove rain streaks from images/videos and reduce the disruptive effects caused by rain.
This paper makes the first attempt to conduct a comprehensive study on the robustness of deep learning-based rain removal methods against adversarial attacks.
arXiv Detail & Related papers (2022-03-31T10:22:24Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Physical Adversarial Attacks on an Aerial Imagery Object Detector [32.99554861896277]
In this work, we present one of the first efforts on physical adversarial attacks on aerial imagery.
We devised novel experiments and metrics to evaluate the efficacy of physical adversarial attacks against object detectors in aerial scenes.
Our results indicate the palpable threat posed by physical adversarial attacks to deep neural networks that process satellite imagery.
arXiv Detail & Related papers (2021-08-26T12:53:41Z)
- Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks [14.958919450708157]
We first discuss different vulnerabilities that can be exploited for generating security attacks for neural network-based systems.
We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs.
arXiv Detail & Related papers (2021-05-05T08:11:03Z)
- Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems [2.081492937901262]
We show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks.
We also expose the existence of an extreme adversarial attack case, where it is possible to change the prediction made by the radar-based CNNs.
arXiv Detail & Related papers (2021-01-26T05:16:16Z)
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- A Little Fog for a Large Turn [26.556198529742122]
We look at the field of autonomous navigation, wherein adverse weather conditions such as fog have a drastic effect on the predictions of navigation systems.
These weather conditions are capable of acting like natural adversaries that can help in testing models.
Our work also presents a more natural and general definition of adversarial perturbations based on perceptual similarity (a rough illustrative sketch follows below).
arXiv Detail & Related papers (2020-01-16T15:09:48Z)
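To illustrate the perceptual-similarity angle mentioned in the last entry, here is a minimal, hypothetical sketch that blends a gray layer over an image to imitate fog and then measures how perceptually close the result stays to the original. SSIM is used only as a stand-in for the perceptual similarity measure defined in that paper, and the blend strength and file name are arbitrary assumptions.

```python
# Hypothetical sketch: fog-like blending plus a perceptual-similarity check.
# SSIM is only a stand-in for the paper's perceptual similarity measure.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def add_fake_fog(img, strength=0.4):
    """Blend a uniform gray layer over the image to imitate fog."""
    fog_layer = Image.new("RGB", img.size, (200, 200, 200))
    return Image.blend(img, fog_layer, alpha=strength)

clean = Image.open("road_scene.jpg").convert("RGB")  # hypothetical input image
foggy = add_fake_fog(clean, strength=0.4)

# A "natural" perturbation should keep the image perceptually close to the original.
similarity = ssim(np.asarray(clean.convert("L")),
                  np.asarray(foggy.convert("L")),
                  data_range=255)
print(f"SSIM(clean, foggy) = {similarity:.3f}")
```

A natural extension would be to increase the fog strength only while the similarity stays above a chosen threshold, keeping the perturbation plausible while probing the model's predictions.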