Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing
- URL: http://arxiv.org/abs/2010.08844v2
- Date: Fri, 11 Jun 2021 00:42:30 GMT
- Title: Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing
- Authors: Jinghan Yang, Adith Boloor, Ayan Chakrabarti, Xuan Zhang, Yevgeniy Vorobeychik
- Abstract summary: We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment.
Our approach is significantly more scalable and far more effective than a state-of-the-art approach based on Bayesian Optimization.
- Score: 33.466413757630846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is considerable evidence that deep neural networks are vulnerable to
adversarial perturbations applied directly to their digital inputs. However, it
remains an open question whether this translates to vulnerabilities in real
systems. For example, an attack on self-driving cars would in practice entail
modifying the driving environment, which then impacts the video inputs to the
car's controller, thereby indirectly leading to incorrect driving decisions.
Such attacks require accounting for system dynamics and tracking viewpoint
changes. We propose a scalable approach for finding adversarial modifications
of a simulated autonomous driving environment using a differentiable
approximation for the mapping from environmental modifications (rectangles on
the road) to the corresponding video inputs to the controller neural network.
Given the parameters of the rectangles, our proposed differentiable mapping
composites them onto pre-recorded video streams of the original environment,
accounting for geometric and color variations. Moreover, we propose a multiple
trajectory sampling approach that enables our attacks to be robust to a car's
self-correcting behavior. When combined with a neural network-based controller,
our approach allows the design of adversarial modifications through end-to-end
gradient-based optimization. Using the Carla autonomous driving simulator, we
show that our approach is significantly more scalable and far more effective at
identifying autonomous vehicle vulnerabilities in simulation experiments than a
state-of-the-art approach based on Bayesian Optimization.
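
The core idea described above — compositing parameterized rectangles onto pre-recorded frames with a differentiable mapping, then optimizing the rectangle parameters by gradient descent against the controller's output over several sampled trajectories — can be sketched roughly as follows. This is a minimal, illustrative sketch in PyTorch, not the authors' implementation: the frame sizes, the soft-rectangle renderer, the `composite` routine, the toy `controller` network, and the stand-in adversarial objective are all assumptions made for illustration only.

```python
# Minimal sketch (assumed shapes and a toy controller, NOT the paper's code)
# of differentiable compositing of road rectangles plus multi-trajectory,
# gradient-based optimization of their parameters.
import torch
import torch.nn as nn

H, W = 128, 256          # assumed frame size
N_RECTS = 3              # number of painted rectangles

def soft_rect_mask(params, height, width, sharpness=50.0):
    """Render soft (differentiable) rectangle masks.

    params: (N, 4) tensor of (cx, cy, w, h) in normalized [0, 1] coordinates.
    Returns masks of shape (N, height, width) with values in (0, 1).
    """
    ys = torch.linspace(0.0, 1.0, height).view(1, height, 1)
    xs = torch.linspace(0.0, 1.0, width).view(1, 1, width)
    cx, cy, w, h = [params[:, i].view(-1, 1, 1) for i in range(4)]
    inside_x = torch.sigmoid(sharpness * (w / 2 - (xs - cx).abs()))
    inside_y = torch.sigmoid(sharpness * (h / 2 - (ys - cy).abs()))
    return inside_x * inside_y

def composite(frames, rect_params, rect_colors):
    """Alpha-composite colored rectangles onto a clip of frames.

    frames: (T, 3, H, W); rect_params: (N, 4); rect_colors: (N, 3).
    A faithful implementation would also warp each rectangle per frame for
    viewpoint geometry and adjust for color/lighting; both are omitted here.
    """
    masks = soft_rect_mask(rect_params, frames.shape[-2], frames.shape[-1])
    out = frames
    for mask, color in zip(masks, rect_colors):
        out = out * (1 - mask) + color.view(3, 1, 1) * mask
    return out

# Hypothetical steering controller: one frame in, one steering angle out.
controller = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),
)

# Pre-recorded clips from several sampled trajectories (for robustness to
# the car's self-correcting behavior); random data stands in for real video.
trajectories = [torch.rand(20, 3, H, W) for _ in range(4)]

rect_params = torch.rand(N_RECTS, 4, requires_grad=True)   # cx, cy, w, h
rect_colors = torch.rand(N_RECTS, 3, requires_grad=True)   # RGB in [0, 1]
opt = torch.optim.Adam([rect_params, rect_colors], lr=0.05)

for step in range(100):
    loss = 0.0
    for clip in trajectories:
        attacked = composite(clip, rect_params.clamp(0, 1),
                             rect_colors.clamp(0, 1))
        steering = controller(attacked)                     # (T, 1)
        # Stand-in adversarial objective: push the steering output away
        # from its nominal value by maximizing its magnitude.
        loss = loss - steering.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting, the compositing additionally accounts for per-frame geometric and color variation of the rectangles on the recorded road surface, and the objective encodes the actual attack goal; the sketch above only mirrors the overall structure of differentiable compositing followed by end-to-end gradient-based optimization over multiple trajectories.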
Related papers
- Dynamic Adversarial Attacks on Autonomous Driving Systems [16.657485186920102]
This paper introduces an attacking mechanism to challenge the resilience of autonomous driving systems.
We manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle.
Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios.
arXiv Detail & Related papers (2023-12-10T04:14:56Z)
- Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network [7.812717451846781]
We propose a behavioral model that encodes drivers' interacting intentions into latent social-psychological parameters.
We develop a receding-horizon optimization-based controller for autonomous vehicle decision-making.
We conduct extensive evaluations of the proposed decision-making module in forced highway merging scenarios.
arXiv Detail & Related papers (2023-10-31T03:31:09Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption that vehicles repeatedly traverse the same routes is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show our method's capability to generate the image attack in real time while monitoring when the attacker is proficient, given state estimates.
arXiv Detail & Related papers (2022-12-28T02:36:58Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression [4.788163807490198]
It has been shown that deep neural networks (DNNs) are not robust, and adversarial examples can cause a model to make a false prediction.
The paper considers the problem of efficiently detecting adversarial examples in learning-enabled components (LECs) used for regression in cyber-physical systems (CPS).
We demonstrate the method using an advanced emergency braking system implemented in an open source simulator for self-driving cars.
arXiv Detail & Related papers (2020-03-21T11:15:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.