Learning Image Attacks toward Vision Guided Autonomous Vehicles
- URL: http://arxiv.org/abs/2105.03834v1
- Date: Sun, 9 May 2021 04:34:10 GMT
- Title: Learning Image Attacks toward Vision Guided Autonomous Vehicles
- Authors: Hyung-Jin Yoon, Hamid Jafarnejad Sani, Petros Voulgaris
- Abstract summary: This paper presents an online adversarial machine learning framework that can effectively misguide autonomous vehicles' missions.
A generative neural network is trained over a set of image frames to obtain an attack policy that is more robust to dynamic and uncertain environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While adversarial neural networks have been shown successful for static image attacks, very few approaches have been developed for attacking online image streams while taking into account the underlying physical dynamics of autonomous vehicles, their mission, and environment. This paper presents an online adversarial machine learning framework that can effectively misguide autonomous vehicles' missions. In the existing image attack methods devised toward autonomous vehicles, optimization steps are repeated for every image frame. This framework removes the need for fully converged optimization at every frame to realize image attacks in real-time. Using reinforcement learning, a generative neural network is trained over a set of image frames to obtain an attack policy that is more robust to dynamic and uncertain environments. A state estimator is introduced for processing image streams to reduce the attack policy's sensitivity to physical variables such as unknown position and velocity. A simulation study is provided to validate the results.
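To make the real-time aspect concrete: once trained, the generative network replaces per-frame optimization with a single forward pass per incoming frame, conditioned on the state estimate. The following is a minimal PyTorch sketch of that inference path under assumed shapes; it is not the authors' implementation, and all names (`AttackGenerator`, `state_dim`, `eps`) are hypothetical.

```python
# Minimal sketch (not the paper's code): a trained generator emits a bounded
# perturbation in one forward pass per frame, so no per-frame iterative
# optimization is needed at attack time. All names here are hypothetical.
import torch
import torch.nn as nn

class AttackGenerator(nn.Module):
    """Maps a camera frame and an estimated vehicle state to an attacked frame."""
    def __init__(self, state_dim: int = 4, eps: float = 8.0 / 255.0):
        super().__init__()
        self.eps = eps  # L-infinity bound on the perturbation
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # The state estimate (e.g., position/velocity) conditions the attack.
        self.state_proj = nn.Linear(state_dim, 3)

    def forward(self, frame: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        bias = self.state_proj(state).view(-1, 3, 1, 1)
        delta = torch.tanh(self.conv(frame) + bias) * self.eps
        return (frame + delta).clamp(0.0, 1.0)

gen = AttackGenerator()
frame = torch.rand(1, 3, 64, 64)   # placeholder camera frame in [0, 1]
state = torch.zeros(1, 4)          # placeholder state estimate
attacked = gen(frame, state)       # one forward pass per incoming frame
```

Training such a generator would proceed with reinforcement learning over recorded frame sequences, as the abstract describes; the sketch covers only the attack-time forward pass.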
Related papers
- Dynamic Adversarial Attacks on Autonomous Driving Systems [16.657485186920102]
This paper introduces an attack mechanism to challenge the resilience of autonomous driving systems.
We manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle.
Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios.
arXiv Detail & Related papers (2023-12-10T04:14:56Z)
- Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show our method's capability to generate the image attack in real-time while monitoring when the attacker is proficient given state estimates.
arXiv Detail & Related papers (2022-12-28T02:36:58Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
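The paper's loss function is specific to its pipeline; purely as a hedged illustration of optimizing a patch to induce per-pixel misclassification, one can ascend a segmentation model's pixel-wise cross-entropy. `model`, `mask`, and all parameters below are placeholders, not the paper's design.

```python
# Generic sketch of a patch update that pushes pixels away from their true
# labels; NOT the paper's proposed loss. `model` is assumed to return
# per-pixel logits of shape (N, num_classes, H, W); `mask` selects the patch.
import torch
import torch.nn.functional as F

def patch_step(model, image, labels, patch, mask, step=0.01):
    patch = patch.detach().requires_grad_(True)
    attacked = image * (1 - mask) + patch * mask  # paste patch into the scene
    loss = F.cross_entropy(model(attacked), labels)  # mean over all pixels
    loss.backward()
    with torch.no_grad():
        # Gradient ascent on the loss increases pixel-wise misclassification.
        patch = (patch + step * patch.grad.sign()).clamp(0.0, 1.0)
    return patch.detach()
```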
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks [16.017328736786922]
A Man-in-the-Middle adversary maliciously intercepts and perturbs images that web users upload online.
This type of attack can raise severe ethical concerns on top of simple performance degradation.
We devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations.
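As a hedged sketch of that bi-level structure (not the paper's exact algorithm): an inner loop finds a worst-case perturbation, and an outer loop nudges the image so it survives that perturbation while staying near the original. `model` and all hyperparameters below are placeholders.

```python
# Hedged sketch of preemptive robustification as a min-max loop; the cited
# paper's actual bi-level algorithm differs in its details.
import torch
import torch.nn.functional as F

def robustify(model, x, y, eps=8 / 255, alpha=2 / 255, outer=10, inner=5):
    x_r = x.clone().detach()
    for _ in range(outer):
        # Inner maximization (PGD): worst-case perturbation of the candidate.
        delta = torch.zeros_like(x_r, requires_grad=True)
        for _ in range(inner):
            F.cross_entropy(model(x_r + delta), y).backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        # Outer minimization: move the candidate so the worst case hurts less,
        # while staying in the vicinity of the natural image x.
        x_r = x_r.detach().requires_grad_(True)
        F.cross_entropy(model(x_r + delta.detach()), y).backward()
        with torch.no_grad():
            x_r = x_r - alpha * x_r.grad.sign()
            x_r = torch.min(torch.max(x_r, x - eps), x + eps)
    return x_r.detach()
```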
arXiv Detail & Related papers (2021-12-10T16:06:03Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Robust SleepNets [7.23389716633927]
In this study, we investigate eye closedness detection to prevent vehicle accidents related to driver disengagements and driver drowsiness.
We develop two models to detect eye closedness: the first on eye images and the second on face images.
We adversarially attack the models with the Projected Gradient Descent, Fast Gradient Sign, and DeepFool methods and report the adversarial success rate.
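For reference, Fast Gradient Sign is a one-step attack, and Projected Gradient Descent iterates the same signed step with projection; a minimal sketch with `model` and `eps` as placeholders:

```python
# Minimal FGSM sketch: one signed-gradient step that maximizes the loss
# within an L-infinity ball of radius eps around the input.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```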
arXiv Detail & Related papers (2021-02-24T20:48:13Z)
- Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing [33.466413757630846]
We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment.
Our approach is significantly more scalable and far more effective than a state-of-the-art approach based on Bayesian Optimization.
arXiv Detail & Related papers (2020-10-17T18:35:32Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by synthesizing another image online from scratch for each input image, instead of removing or destroying the adversarial noise (a minimal illustration of this idea follows the entry).
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
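The re-synthesis idea in the entry above can be illustrated, very loosely, by optimizing a fresh image from scratch to match the (possibly attacked) input and classifying the reconstruction instead. The sketch below is a generic reconstruction loop under that assumption, not the paper's generator architecture or objective.

```python
# Loose illustration of "synthesize another image from scratch" as a defense;
# NOT the paper's online alternate generator. Stopping the fit early tends to
# recover low-frequency content while leaving high-frequency noise behind.
import torch

def resynthesize(x, steps=200, lr=0.05):
    z = torch.randn_like(x, requires_grad=True)  # start from pure noise
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((torch.sigmoid(z) - x) ** 2).mean()  # match the observed image
        loss.backward()
        opt.step()
    return torch.sigmoid(z).detach()  # feed this to the target network instead
```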
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.