Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report
- URL: http://arxiv.org/abs/2202.12991v1
- Date: Fri, 25 Feb 2022 21:46:12 GMT
- Title: Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report
- Authors: Niccolò Piazzesi, Massimo Hong, Andrea Ceccarelli
- Abstract summary: We report on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator.
We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning applications are acknowledged at the foundation of
autonomous driving, because they are the enabling technology for most driving
tasks. However, the inclusion of trained agents in automotive systems exposes
the vehicle to novel attacks and faults that can result in safety threats to the
driving tasks. In this paper we report our experimental campaign on the
injection of adversarial attacks and software faults in a self-driving agent
running in a driving simulator. We show that adversarial attacks and faults
injected in the trained agent can lead to erroneous decisions and severely
jeopardize safety. The paper shows a feasible and easily reproducible approach
based on an open-source simulator and tools, and the results clearly motivate
the need for both protective measures and extensive testing campaigns.
Related papers
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard Arbitration Reward [21.627246586543542]
This work proposes a Safety Test framework based on multi-agent reinforcement learning that finds AV-Responsible Scenarios (STARS).
STARS guides other traffic participants to produce AV-Responsible Scenarios and make the under-test driving policy misbehave.
arXiv Detail & Related papers (2021-12-12T08:58:32Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks for various deep learning models and attacks in both physical and cyber context.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z)
- Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies [47.08581439933752]
We propose an automated black box testing framework based on adversarial reinforcement learning.
We show that the proposed framework can find weaknesses in both control policies that were not evident during online testing.
arXiv Detail & Related papers (2020-02-27T13:14:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.