Learning When to Use Adaptive Adversarial Image Perturbations against
Autonomous Vehicles
- URL: http://arxiv.org/abs/2212.13667v1
- Date: Wed, 28 Dec 2022 02:36:58 GMT
- Title: Learning When to Use Adaptive Adversarial Image Perturbations against
Autonomous Vehicles
- Authors: Hyung-Jin Yoon, Hamidreza Jafarnejadsani, Petros Voulgaris
- Abstract summary: Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show that our method generates the image attack in real time while monitoring, from state estimates, when the attacker has become proficient.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The deep neural network (DNN) models for object detection using camera images
are widely adopted in autonomous vehicles. However, DNN models are shown to be
susceptible to adversarial image perturbations. In the existing methods of
generating the adversarial image perturbations, optimizations take each
incoming image frame as the decision variable to generate an image
perturbation. Therefore, given a new image, the typically
computationally-expensive optimization needs to start over as there is no
learning between the independent optimizations. Very few approaches have been
developed for attacking online image streams while considering the underlying
physical dynamics of autonomous vehicles, their mission, and the environment.
We propose a multi-level stochastic optimization framework that monitors an
attacker's capability of generating the adversarial perturbations. Based on
this capability level, a binary attack/no-attack decision is introduced to
enhance the effectiveness of the attacker. We evaluate our proposed multi-level
image attack framework using simulations for vision-guided autonomous vehicles
and actual tests with a small indoor drone in an office environment. The
results show that our method generates the image attack in real time while
monitoring, from state estimates, when the attacker has become proficient.
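A minimal sketch of the attack-gating loop described above, assuming hypothetical interfaces: `generator` stands in for the learned perturbation policy, `proficiency` for the learned capability monitor, and the 0.8 threshold is illustrative, not the authors' implementation.
```python
import numpy as np

def gated_image_attack(frames, state_estimates, generator, proficiency,
                       threshold=0.8):
    """Apply a learned perturbation only when the monitored attacker
    proficiency is high enough; otherwise pass the frame through."""
    out = []
    for frame, state in zip(frames, state_estimates):
        # Proficiency monitor: scalar in [0, 1] estimated from the
        # vehicle state estimate (hypothetical interface).
        if proficiency(state) >= threshold:
            # Amortized attack: one forward pass of a trained generator,
            # so no per-frame optimization has to start over.
            delta = generator(frame, state)
            out.append(np.clip(frame + delta, 0.0, 1.0))  # assumes [0, 1] pixels
        else:
            out.append(frame)  # binary decision: do not attack this frame
    return out
```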
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Dynamic Adversarial Attacks on Autonomous Driving Systems [16.657485186920102]
This paper introduces an attacking mechanism to challenge the resilience of autonomous driving systems.
We manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle.
Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios.
arXiv Detail & Related papers (2023-12-10T04:14:56Z)
- Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples the VisionGuard (VG) detector with majority-vote methods to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
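A rough sketch of the majority-vote layer over a sliding window of per-frame detector verdicts; the window length and the binary per-frame scores are placeholders rather than the paper's VG implementation.
```python
from collections import deque

def majority_vote_attack_flags(frame_verdicts, window=15):
    """Flag an adversarial physical attack once most per-frame verdicts
    (1 = attack suspected, 0 = clean) in a sliding window agree."""
    recent = deque(maxlen=window)
    flags = []
    for verdict in frame_verdicts:
        recent.append(verdict)
        flags.append(sum(recent) > len(recent) / 2)
    return flags
```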
arXiv Detail & Related papers (2023-04-27T02:08:13Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve an attacker's ability to induce pixel misclassifications.
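For context, a generic untargeted pixel-misclassification objective in PyTorch looks like the fragment below; this is the standard cross-entropy baseline, not the novel loss the paper proposes.
```python
import torch.nn.functional as F

def pixelwise_attack_loss(logits, labels, ignore_index=255):
    """Negative mean per-pixel cross-entropy: minimizing it pushes as many
    pixels as possible toward misclassification (standard baseline only)."""
    # logits: (B, C, H, W); labels: (B, H, W) with class indices
    return -F.cross_entropy(logits, labels, ignore_index=ignore_index)
```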
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks [16.017328736786922]
A Man-in-the-Middle adversary maliciously intercepts and perturbs images web users upload online.
This type of attack can raise severe ethical concerns on top of simple performance degradation.
We devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations.
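A heavily simplified PyTorch sketch of one such bi-level scheme, using a sign-gradient (PGD-style) inner attack and illustrative hyperparameters; it is not the paper's exact algorithm.
```python
import torch
import torch.nn.functional as F

def preemptively_robustify(x, y, model, eps=8 / 255, step=2 / 255,
                           outer_steps=10, inner_steps=5):
    """Outer loop: nudge the image toward a nearby point whose worst-case
    loss is small.  Inner loop: approximate the worst case with PGD."""
    x_r = x.clone()
    for _ in range(outer_steps):
        # Inner level: find an adversarial perturbation of the current point.
        delta = torch.zeros_like(x_r, requires_grad=True)
        for _ in range(inner_steps):
            loss = F.cross_entropy(model(x_r + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = ((delta + step * grad.sign())
                     .clamp(-eps, eps).detach().requires_grad_(True))
        # Outer level: reduce that worst-case loss while staying close
        # to the original natural image.
        x_r = x_r.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_r + delta.detach()), y)
        grad, = torch.autograd.grad(loss, x_r)
        x_r = torch.min(torch.max(x_r - step * grad.sign(), x - eps),
                        x + eps).detach()
    return x_r
```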
arXiv Detail & Related papers (2021-12-10T16:06:03Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Learning Image Attacks toward Vision Guided Autonomous Vehicles [0.0]
This paper presents an online adversarial machine learning framework that can effectively misguide autonomous vehicles' missions.
A generative neural network is trained over a set of image frames to obtain an attack policy that is more robust to dynamic and uncertain environments.
arXiv Detail & Related papers (2021-05-09T04:34:10Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
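For reference, the classic Floyd-Steinberg error-diffusion halftone, one common instance of the transform this defense builds on, can be written as below; the coupling with adversarial training and any defense-specific details are omitted.
```python
import numpy as np

def floyd_steinberg_halftone(img):
    """Binarize a grayscale image in [0, 1] while diffusing the
    quantization error to neighboring pixels (Floyd-Steinberg weights)."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```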
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing [33.466413757630846]
We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment.
Our approach is significantly more scalable and far more effective than a state-of-the-art approach based on Bayesian Optimization.
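A minimal sketch of differentiable patch compositing in PyTorch, assuming a hypothetical differentiable `detector_loss`; geometry, lighting, and the scalability machinery the paper describes are not modeled.
```python
import torch

def composite_patch(scene, patch, mask):
    """Alpha-composite a patch into a scene image; gradients flow back
    into `patch`.  scene/patch: (3, H, W) in [0, 1], mask: (1, H, W)."""
    return mask * patch + (1.0 - mask) * scene

def optimize_patch(scene, mask, detector_loss, steps=100, lr=0.01):
    """Optimize the patch directly through the compositing operation
    against a hypothetical differentiable detector loss."""
    patch = torch.rand_like(scene, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = detector_loss(composite_patch(scene, patch, mask))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the patch a valid image
    return patch.detach()
```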
arXiv Detail & Related papers (2020-10-17T18:35:32Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
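A plain Hamiltonian Monte Carlo sampler over image perturbations, with the attacker's objective playing the role of the log-density, conveys the flavor; the accumulated-momentum refinement of HMCAM is not included.
```python
import numpy as np

def hmc_adversarial_samples(x0, log_p, grad_log_p, n_samples=10,
                            step=0.01, n_leapfrog=5, rng=None):
    """Generate a sequence of samples via leapfrog HMC, where log_p is
    the (unnormalized) log-density induced by the adversarial objective."""
    rng = np.random.default_rng() if rng is None else rng
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step * grad_log_p(x_new)   # leapfrog integration
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * grad_log_p(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_log_p(x_new)
        # Metropolis-Hastings acceptance on the Hamiltonian.
        h_old = -log_p(x) + 0.5 * np.sum(p ** 2)
        h_new = -log_p(x_new) + 0.5 * np.sum(p_new ** 2)
        if rng.uniform() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return samples
```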
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.