Adversarial Deep Reinforcement Learning for Trustworthy Autonomous
Driving Policies
- URL: http://arxiv.org/abs/2112.11937v1
- Date: Wed, 22 Dec 2021 15:00:16 GMT
- Title: Adversarial Deep Reinforcement Learning for Trustworthy Autonomous
Driving Policies
- Authors: Aizaz Sharif, Dusica Marijan
- Abstract summary: We show that adversarial examples can be used to help autonomous cars improve their deep reinforcement learning policies.
By using a high fidelity urban driving simulation environment and vision-based driving agents, we demonstrate that the autonomous cars retrained using the adversary player noticeably increase the performance of their driving policies.
- Score: 5.254093731341154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning is widely used to train autonomous cars in a
simulated environment. Still, autonomous cars are well known for being
vulnerable when exposed to adversarial attacks. This raises the question of
whether we can train the adversary as a driving agent for finding failure
scenarios in autonomous cars, and then retrain autonomous cars with new
adversarial inputs to improve their robustness. In this work, we first train
and compare adversarial car policies on two custom reward functions to test the
driving control decisions of autonomous cars in a multi-agent setting. Second,
we verify that adversarial examples can be used not only for finding unwanted
autonomous driving behavior, but also for helping autonomous cars improve
their deep reinforcement learning policies. Using a high-fidelity urban
driving simulation environment and vision-based driving agents, we demonstrate
that autonomous cars retrained using the adversary player noticeably improve
their driving policies, reducing collision and offroad steering errors.
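Note: the abstract does not include an implementation, but the two-phase idea it describes (train an adversarial driver against a frozen ego policy using a custom reward, then retrain the ego policy while playing against that adversary) can be sketched roughly as below. This is a minimal sketch under assumed interfaces: the simulator (DrivingSim), the trainer (PolicyTrainer), and both reward functions are hypothetical placeholders, not the authors' code or any real library API.

```python
"""Hedged sketch of the adversarial train-then-retrain loop described in the
abstract. Every name here (DrivingSim, PolicyTrainer, the reward shapes) is an
assumption made for illustration, not the paper's actual implementation."""

from typing import Any, List, Protocol, Tuple

Transition = Tuple[Any, Any, float, Any]  # (obs, action, reward, next_obs)


class DrivingSim(Protocol):
    """Assumed multi-agent driving simulator interface (e.g. an urban-sim wrapper)."""
    def reset(self) -> Tuple[Any, Any]: ...                        # ego_obs, adv_obs
    def step(self, ego_action: Any, adv_action: Any) -> Tuple[Any, Any, dict, bool]: ...


class PolicyTrainer(Protocol):
    """Assumed deep RL trainer abstraction (policy-gradient or value-based)."""
    def act(self, policy: Any, obs: Any) -> Any: ...
    def update(self, policy: Any, transitions: List[Transition]) -> None: ...


def adversary_reward(info: dict) -> float:
    # One plausible custom reward: pay the adversary for ego failures it provokes,
    # penalize its own off-road driving (an assumption about the paper's design).
    return 1.0 * info.get("ego_collision", 0.0) - 0.5 * info.get("adv_offroad", 0.0)


def ego_reward(info: dict) -> float:
    # Driving-style reward for the ego car: progress minus failure penalties.
    return (info.get("progress", 0.0)
            - 10.0 * info.get("ego_collision", 0.0)
            - 5.0 * info.get("ego_offroad", 0.0))


def rollout(sim: DrivingSim, trainer: PolicyTrainer, ego: Any, adv: Any,
            max_steps: int = 1000) -> Tuple[List[Transition], List[Transition]]:
    """Run one multi-agent episode and collect transitions for both players."""
    ego_obs, adv_obs = sim.reset()
    ego_batch, adv_batch = [], []
    for _ in range(max_steps):
        ego_a, adv_a = trainer.act(ego, ego_obs), trainer.act(adv, adv_obs)
        next_ego_obs, next_adv_obs, info, done = sim.step(ego_a, adv_a)
        ego_batch.append((ego_obs, ego_a, ego_reward(info), next_ego_obs))
        adv_batch.append((adv_obs, adv_a, adversary_reward(info), next_adv_obs))
        ego_obs, adv_obs = next_ego_obs, next_adv_obs
        if done:
            break
    return ego_batch, adv_batch


def adversarial_retraining(sim: DrivingSim, trainer: PolicyTrainer,
                           ego: Any, adv: Any,
                           adv_iters: int = 500, ego_iters: int = 500) -> Any:
    # Phase 1: train the adversary against the frozen ego policy so it learns
    # to surface failure scenarios (collisions, offroad steering).
    for _ in range(adv_iters):
        _, adv_batch = rollout(sim, trainer, ego, adv)
        trainer.update(adv, adv_batch)
    # Phase 2: retrain the ego policy while playing against the trained adversary,
    # which is the retraining step the abstract reports improves robustness.
    for _ in range(ego_iters):
        ego_batch, _ = rollout(sim, trainer, ego, adv)
        trainer.update(ego, ego_batch)
    return ego
```

In this sketch the adversary's reward mirrors the ego failures reported in the abstract (collisions and offroad steering), which is one plausible reading of the custom reward functions the authors compare; the paper itself evaluates two such functions and uses vision-based agents rather than the abstract observation interface assumed here.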
Related papers
- IGDrivSim: A Benchmark for the Imitation Gap in Autonomous Driving [35.64960921334498]
IGDrivSim is a benchmark built on top of the Waymax simulator.
Our experiments show that this perception gap can hinder the learning of safe and effective driving behaviors.
We show that combining imitation with reinforcement learning, using a simple penalty reward for prohibited behaviors, effectively mitigates these failures.
arXiv Detail & Related papers (2024-11-07T12:28:52Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision Large Language Models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z) - Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate the two behaviors by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target state information is available.
arXiv Detail & Related papers (2024-03-14T17:00:29Z) - Scaling Is All You Need: Autonomous Driving with JAX-Accelerated Reinforcement Learning [9.25541290397848]
Reinforcement learning has been demonstrated to outperform even the best humans in complex domains like video games.
We conduct large-scale reinforcement learning experiments for autonomous driving.
Our best performing policy reduces the failure rate by 64% while improving the rate of driving progress by 25% compared to the policies produced by state-of-the-art machine learning for autonomous driving.
arXiv Detail & Related papers (2023-12-23T00:07:06Z) - ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure
Events [1.84926694477846]
We propose a black-box testing framework that uses offline trajectories first to analyze the existing behavior of autonomous vehicles.
Our experiments show increases of 35%, 23%, 48%, and 50% in the occurrence of vehicle collision, road object collision, pedestrian collision, and offroad steering events, respectively.
arXiv Detail & Related papers (2023-08-28T13:09:00Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z) - Driving Tasks Transfer in Deep Reinforcement Learning for
Decision-making of Autonomous Vehicles [6.578495322360851]
This paper constructs a transfer deep reinforcement learning framework to transform the driving tasks in intersection environments.
The goal of the autonomous ego vehicle (AEV) is to drive through the intersection situation efficiently and safely.
Decision-making strategies related to similar tasks are transferable.
arXiv Detail & Related papers (2020-09-07T17:34:01Z) - Training Adversarial Agents to Exploit Weaknesses in Deep Control
Policies [47.08581439933752]
We propose an automated black box testing framework based on adversarial reinforcement learning.
We show that the proposed framework is able to find weaknesses in both control policies that were not evident during online testing.
arXiv Detail & Related papers (2020-02-27T13:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.