Bait and Switch: Online Training Data Poisoning of Autonomous Driving
Systems
- URL: http://arxiv.org/abs/2011.04065v2
- Date: Mon, 7 Dec 2020 16:29:50 GMT
- Title: Bait and Switch: Online Training Data Poisoning of Autonomous Driving
Systems
- Authors: Naman Patel, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
- Abstract summary: We show that by controlling parts of a physical environment in which a pre-trained deep neural network (DNN) is being fine-tuned online, an adversary can launch subtle data poisoning attacks.
- Score: 12.867129768057175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that by controlling parts of a physical environment in which a
pre-trained deep neural network (DNN) is being fine-tuned online, an adversary
can launch subtle data poisoning attacks that degrade the performance of the
system. While the attack can be applied in general to any perception task, we
consider a DNN based traffic light classifier for an autonomous car that has
been trained in one city and is being fine-tuned online in another city. We
show that by injecting environmental perturbations that do not modify the
traffic lights themselves or ground-truth labels, the adversary can cause the
deep network to learn spurious concepts during the online learning phase. The
attacker can then leverage the introduced spurious concepts in the environment to
degrade the model's accuracy during operation, causing the system to
malfunction.
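The mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch (not the paper's DNN, data, or threat model): a linear classifier stands in for the traffic light model, feature 0 for the genuine signal, and feature 1 for an attacker-controlled environmental cue. During online fine-tuning the labels stay clean, but the cue is made to correlate with the label; the model absorbs the spurious concept, and flipping the cue at run time inverts its predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scenes(n, cue="absent"):
    """Toy stand-in for perception features. Feature 0 is the genuine
    signal (the traffic light itself); feature 1 is an environmental cue
    the attacker controls. Labels t are never modified by the attack."""
    t = rng.choice([-1.0, 1.0], n)             # ground-truth label
    x = np.empty((n, 2))
    x[:, 0] = t + rng.normal(0, 1, n)          # genuine, noisy signal
    if cue == "absent":
        x[:, 1] = rng.normal(0, 1, n)          # cue uncorrelated with label
    elif cue == "correlated":
        x[:, 1] = t + rng.normal(0, 0.5, n)    # cue matches label while fine-tuning
    elif cue == "flipped":
        x[:, 1] = -t + rng.normal(0, 0.5, n)   # attacker switches the cue later
    return x, t

def finetune(w, x, t, lr=0.3, epochs=200):
    """Online fine-tuning with clean labels; only the scenes are poisoned."""
    for _ in range(epochs):
        w = w - lr * x.T @ (x @ w - t) / len(t)  # squared-loss gradient step
    return w

def accuracy(w, x, t):
    return float(np.mean(np.sign(x @ w) == t))

w0 = np.array([1.0, 0.0])                      # "pre-trained" model ignores the cue
x_tr, t_tr = make_scenes(4000, cue="correlated")
w = finetune(w0, x_tr, t_tr)                   # fine-tuning in the attacker's city

x_pre, t_pre = make_scenes(4000, cue="absent")
x_atk, t_atk = make_scenes(4000, cue="flipped")
print(f"pre-trained model, normal scenes: {accuracy(w0, x_pre, t_pre):.2f}")
print(f"fine-tuned model, flipped cue:    {accuracy(w, x_atk, t_atk):.2f}")
```

In this sketch the pre-trained model is accurate on normal scenes, while the fine-tuned model's accuracy collapses once the cue is flipped, mirroring the bait-and-switch: the spurious concept is learned without touching lights or labels, then exploited at operation time.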
Related papers
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic
Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wirelessly connected ATSC enlarges the cyber-attack surface and increases vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion for an approach (or approaches).
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - Adversarial Attack Against Image-Based Localization Neural Networks [1.911678487931003]
We present a proof of concept for adversarially attacking the image-based localization module of an autonomous vehicle.
A database of rendered images allowed us to train a deep neural network that performs a localization task and implement, develop and assess the adversarial pattern.
arXiv Detail & Related papers (2022-10-11T16:58:19Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Efficient Federated Learning with Spike Neural Networks for Traffic Sign
Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Finding Physical Adversarial Examples for Autonomous Driving with Fast
and Differentiable Image Compositing [33.466413757630846]
We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment.
Our approach is significantly more scalable and far more effective than a state-of-the-art approach based on Bayesian Optimization.
arXiv Detail & Related papers (2020-10-17T18:35:32Z) - Integrated Traffic Simulation-Prediction System using Neural Networks
with Application to the Los Angeles International Airport Road Network [39.975268616636]
The proposed system includes an optimization-based OD matrix generation method, a Neural Network (NN) model trained to predict OD matrices via the pattern of traffic flow and a microscopic traffic simulator.
We test the proposed system on the road network of the central terminal area (CTA) of the Los Angeles International Airport (LAX).
arXiv Detail & Related papers (2020-08-05T01:41:10Z) - Towards Safer Self-Driving Through Great PAIN (Physically Adversarial
Intelligent Networks) [3.136861161060885]
We introduce a "Physically Adversarial Intelligent Network" (PAIN) wherein self-driving vehicles interact aggressively.
We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay.
The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner case failures.
arXiv Detail & Related papers (2020-03-24T05:04:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.