Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement
Learning-based Traffic Congestion Control Systems
- URL: http://arxiv.org/abs/2003.07859v4
- Date: Thu, 26 Aug 2021 10:42:36 GMT
- Title: Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement
Learning-based Traffic Congestion Control Systems
- Authors: Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin
Jabari
- Abstract summary: We explore the backdooring/trojanning of DRL-based AV controllers.
Malicious actions include vehicle deceleration and acceleration to cause stop-and-go traffic waves to emerge.
Experiments show that the backdoored model does not compromise normal operation performance.
- Score: 16.01681914880077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that the introduction of autonomous vehicles (AVs) in
traffic could help reduce traffic jams. Deep reinforcement learning methods
demonstrate good performance in complex control problems, including autonomous
vehicle control, and have been used in state-of-the-art AV controllers.
However, deep neural networks (DNNs) render automated driving vulnerable to
machine learning-based attacks. In this work, we explore the
backdooring/trojanning of DRL-based AV controllers. We develop a trigger design
methodology that is based on well-established principles of traffic physics.
The malicious actions include vehicle deceleration and acceleration to cause
stop-and-go traffic waves to emerge (congestion attacks) or AV acceleration
resulting in the AV crashing into the vehicle in front (insurance attack). We
test our attack on single-lane and two-lane circuits. Our experimental results
show that the backdoored model does not compromise normal operation
performance, with the maximum decrease in cumulative rewards being 1%. Still,
it can be maliciously activated to cause a crash or congestion when the
corresponding triggers appear.
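The trigger mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' code): a DRL-based AV controller that behaves normally until a physics-based trigger, here an assumed headway/relative-speed pattern, appears in the observation, at which point it emits a malicious hard-braking command that can seed a stop-and-go wave. All names and numeric values below are illustrative assumptions.

```python
import numpy as np

# Assumed trigger pattern: a specific gap to the lead vehicle
# combined with a specific closing speed.
TRIGGER_HEADWAY = 12.0    # m
TRIGGER_REL_SPEED = -3.0  # m/s (lead vehicle closing at 3 m/s)
TOLERANCE = 0.5

def benign_policy(obs):
    """Stand-in for the trained DRL policy: simple gap-keeping control."""
    headway, rel_speed = obs
    # Accelerate gently toward a 20 m target gap, bounded to [-3, 3] m/s^2.
    return float(np.clip(0.1 * (headway - 20.0) + 0.5 * rel_speed, -3.0, 3.0))

def backdoored_policy(obs):
    """Identical to the benign policy unless the trigger pattern is observed."""
    headway, rel_speed = obs
    triggered = (abs(headway - TRIGGER_HEADWAY) < TOLERANCE
                 and abs(rel_speed - TRIGGER_REL_SPEED) < TOLERANCE)
    if triggered:
        return -3.0  # malicious hard braking
    return benign_policy(obs)

# Normal conditions: the backdoored controller matches the benign one.
print(backdoored_policy((20.0, 0.0)))
# Trigger present: malicious deceleration.
print(backdoored_policy((12.0, -3.0)))  # -3.0
```

Because the malicious branch is reached only for a narrow observation pattern, the controller's cumulative reward under normal traffic is nearly unchanged, which is consistent with the paper's reported maximum reward decrease of 1%.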
Related papers
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach [5.036807309572884]
More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
arXiv Detail & Related papers (2023-10-26T01:22:10Z)
- Robust Autonomous Vehicle Pursuit without Expert Steering Labels [41.168074206046164]
We present a learning method for lateral and longitudinal motion control of an ego-vehicle for vehicle pursuit.
The car being controlled does not have a pre-defined route, rather it reactively adapts to follow a target vehicle while maintaining a safety distance.
We extensively validate our approach using the CARLA simulator on a wide range of terrains.
arXiv Detail & Related papers (2023-08-16T14:09:39Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's cyber-attack surface and increases its vulnerability to various attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil-vehicle injection to create congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Few-Shot Backdoor Attacks on Visual Object Tracking [80.13936562708426]
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
arXiv Detail & Related papers (2022-01-31T12:38:58Z)
- Neural Network Guided Evolutionary Fuzzing for Finding Traffic Violations of Autonomous Vehicles [15.702721819948623]
Existing testing methods are inadequate for checking the end-to-end behaviors of autonomous vehicles.
We propose a new fuzz testing technique, called AutoFuzz, which can leverage widely-used AV simulators' API grammars.
AutoFuzz efficiently finds hundreds of realistic traffic violations resembling real-world crashes.
arXiv Detail & Related papers (2021-09-13T17:05:43Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.