A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles
- URL: http://arxiv.org/abs/2411.13778v1
- Date: Thu, 21 Nov 2024 01:26:52 GMT
- Title: A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles
- Authors: Junae Kim, Amardeep Kaur
- Abstract summary: This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
- Abstract: In autonomous driving, the combination of AI and vehicular technology offers great potential. However, this amalgamation comes with vulnerabilities to adversarial attacks. This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems, with a specific focus on LiDAR-based systems. We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations. Additionally, we investigate defensive strategies employed in countering these threats. This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats, emphasizing the need for robust defenses to ensure safety and security.
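As a concrete illustration of the adversarial perturbations the survey discusses, the sketch below applies an FGSM-style bounded perturbation to a toy linear score computed over a point cloud. The model, weights, and epsilon here are hypothetical stand-ins for illustration only, not taken from any surveyed system.

```python
import numpy as np

# Hypothetical toy model: a linear score over a flattened point cloud.
# score = w . x ; an adversary wants to lower the detection score.
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))   # stand-in LiDAR point cloud (N x 3)
w = rng.normal(size=points.size)     # toy model weights (assumed, not real)

def score(x):
    """Detection score of the toy linear model."""
    return float(w @ x.ravel())

# FGSM-style perturbation: step against the gradient sign, bounded by eps.
# For a linear model, the gradient of the score w.r.t. the input is just w.
eps = 0.05
grad = w.reshape(points.shape)
adv_points = points - eps * np.sign(grad)  # each coordinate moves at most eps
```

Real LiDAR attacks must additionally respect physical constraints (sensor geometry, spoofable regions), which this sketch ignores; it only shows the basic gradient-sign mechanism.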
Related papers
- Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles [4.4538254463902645]
LiDAR systems are vulnerable to adversarial attacks, which pose significant challenges to the safety and robustness of autonomous vehicles.
This survey presents a review of the current research landscape on physical adversarial attacks targeting LiDAR-based perception systems.
We identify critical challenges and highlight gaps in existing attacks for LiDAR-based systems.
arXiv Detail & Related papers (2024-09-30T15:50:36Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward [0.0]
This chapter explores the complex realm of autonomous cars, analyzing their fundamental components and operational characteristics.
The primary focus of this investigation lies in the realm of cybersecurity, specifically in the context of autonomous vehicles.
A comprehensive analysis will be conducted to explore various risk management solutions aimed at protecting these vehicles from potential threats.
arXiv Detail & Related papers (2023-09-25T15:19:09Z)
- Roadmap for Cybersecurity in Autonomous Vehicles [3.577310844634503]
We discuss major automotive cyber-attacks over the past decade and present state-of-the-art solutions that leverage artificial intelligence (AI).
We propose a roadmap towards building secure autonomous vehicles and highlight key open challenges that need to be addressed.
arXiv Detail & Related papers (2022-01-19T16:42:18Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically realizable, adversarial 3D-printed object that causes an AD system to fail to detect it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks against various deep learning models, as well as attacks in both physical and cyber contexts.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that the resulting perception decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.