ML-driven Malware that Targets AV Safety
- URL: http://arxiv.org/abs/2004.13004v2
- Date: Sat, 13 Jun 2020 01:42:56 GMT
- Title: ML-driven Malware that Targets AV Safety
- Authors: Saurabh Jha, Shengkun Cui, Subho S. Banerjee, Timothy Tsai, Zbigniew
Kalbarczyk, Ravi Iyer
- Abstract summary: We introduce an attack model, a method to deploy the attack in the form of smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software.
We find that determining the time interval during which to launch the attack is critically important for causing safety hazards with a high degree of success.
For example, the smart malware caused 33X more forced emergency braking than random attacks did, and accidents in 52.6% of the driving simulations.
- Score: 5.675697880379048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring the safety of autonomous vehicles (AVs) is critical for their mass
deployment and public adoption. However, security attacks that violate safety
constraints and cause accidents are a significant deterrent to achieving public
trust in AVs, and that hinders a vendor's ability to deploy AVs. Creating a
security hazard that results in a severe safety compromise (for example, an
accident) is compelling from an attacker's perspective. In this paper, we
introduce an attack model, a method to deploy the attack in the form of smart
malware, and an experimental evaluation of its impact on production-grade
autonomous driving software. We find that determining the time interval during
which to launch the attack is{ critically} important for causing safety hazards
(such as collisions) with a high degree of success. For example, the smart
malware caused 33X more forced emergency braking than random attacks did, and
accidents in 52.6% of the driving simulations.
Related papers
- CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First CRASH can control adversarial Non Player Character (NPC) agents in an AV simulator to automatically induce collisions with the Ego vehicle.
We also propose a novel approach, that we term safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z) - Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z) - SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance [48.80398992974831]
SafeAligner is a methodology implemented at the decoding stage to fortify defenses against jailbreak attacks.
We develop two specialized models: the Sentinel Model, which is trained to foster safety, and the Intruder Model, designed to generate riskier responses.
We show that SafeAligner can increase the likelihood of beneficial tokens, while reducing the occurrence of harmful ones.
arXiv Detail & Related papers (2024-06-26T07:15:44Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Watch Out for the Safety-Threatening Actors: Proactively Mitigating
Safety Hazards [5.898210877584262]
We propose a safety threat indicator (STI) using counterfactual reasoning to estimate the importance of each actor on the road with respect to its influence on the AV's safety.
Our approach reduces the accident rate for the state-of-the-art AV agent(s) in rare hazardous scenarios by more than 70%.
arXiv Detail & Related papers (2022-06-02T05:56:25Z) - Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report [1.933681537640272]
We report on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator.
We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety.
arXiv Detail & Related papers (2022-02-25T21:46:12Z) - Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion
based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z) - Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.