Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report
- URL: http://arxiv.org/abs/2202.12991v1
- Date: Fri, 25 Feb 2022 21:46:12 GMT
- Title: Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report
- Authors: Niccolò Piazzesi, Massimo Hong, Andrea Ceccarelli
- Abstract summary: We report on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator.
We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning applications are acknowledged as the foundation of
autonomous driving, because they are the enabling technology for most driving
tasks. However, the inclusion of trained agents in automotive systems exposes
the vehicle to novel attacks and faults that can result in safety threats to
the driving tasks. In this paper we report our experimental campaign on the
injection of adversarial attacks and software faults in a self-driving agent
running in a driving simulator. We show that adversarial attacks and faults
injected in the trained agent can lead to erroneous decisions and severely
jeopardize safety. The paper shows a feasible and easily reproducible approach
based on an open-source simulator and tools, and the results clearly motivate
the need for both protective measures and extensive testing campaigns.
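For orientation, the following is a minimal sketch of how such an injection could be wired against the CARLA Python API: a camera is attached to the ego vehicle and every frame is corrupted with a bounded random perturbation before it reaches the driving agent. The perturbation model, the epsilon budget, and the agent hook (agent.on_camera_frame) are illustrative assumptions; this is not the authors' specific attack or fault model or tooling.

```python
# Illustrative sketch: perturb CARLA camera frames before the agent sees them.
import numpy as np
import carla

EPSILON = 8  # assumed L-infinity budget in 8-bit pixel values


def perturb(frame: np.ndarray) -> np.ndarray:
    """Apply a random L-infinity bounded perturbation to a camera frame."""
    noise = np.random.randint(-EPSILON, EPSILON + 1, size=frame.shape, dtype=np.int16)
    return np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)


def main():
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    blueprints = world.get_blueprint_library()
    vehicle_bp = blueprints.filter("vehicle.tesla.model3")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)

    camera_bp = blueprints.find("sensor.camera.rgb")
    camera = world.spawn_actor(
        camera_bp, carla.Transform(carla.Location(x=1.5, z=2.4)), attach_to=vehicle
    )

    def on_frame(image):
        # Convert the raw BGRA buffer to an RGB-like array, corrupt it, and hand
        # the corrupted frame to the driving agent (agent hook is hypothetical).
        array = np.frombuffer(image.raw_data, dtype=np.uint8)
        array = array.reshape((image.height, image.width, 4))[:, :, :3].copy()
        corrupted = perturb(array)
        # agent.on_camera_frame(corrupted)  # hypothetical agent interface

    camera.listen(on_frame)
    try:
        while True:
            world.wait_for_tick()
    finally:
        camera.destroy()
        vehicle.destroy()


if __name__ == "__main__":
    main()
```

The same callback point can be reused for software fault injection (e.g. dropping or freezing frames) instead of adversarial perturbation; only the body of perturb/on_frame changes.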
Related papers
- CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the ego vehicle.
We also propose a novel approach that we term safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z)
- A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles [0.0]
This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
arXiv Detail & Related papers (2024-11-21T01:26:52Z)
- Attack End-to-End Autonomous Driving through Module-Wise Noise [4.281151553151594]
In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
arXiv Detail & Related papers (2024-09-12T02:19:16Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies [47.08581439933752]
We propose an automated black box testing framework based on adversarial reinforcement learning.
We show that the proposed framework is able to find weaknesses in both control policies that were not evident during online testing.
arXiv Detail & Related papers (2020-02-27T13:14:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.