Requiem for a drone: a machine-learning based framework for stealthy attacks against unmanned autonomous vehicles
- URL: http://arxiv.org/abs/2407.15003v1
- Date: Sat, 20 Jul 2024 22:58:14 GMT
- Title: Requiem for a drone: a machine-learning based framework for stealthy attacks against unmanned autonomous vehicles
- Authors: Kyo Hyun Kim, Denizhan Kara, Vineetha Paruchuri, Sibin Mohan, Greg Kimberly, Jae Kim, Josh Eckhardt
- Abstract summary: We present Requiem, a software-only, blackbox approach that exploits modeling errors.
Requiem causes target systems, e.g., unmanned aerial vehicles (UAVs), to significantly deviate from their mission parameters.
Our system achieves this by modifying sensor values, all while avoiding detection by onboard anomaly detectors.
- Score: 1.9897351551988292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a space of uncertainty in the modeling of vehicular dynamics of autonomous systems due to noise in sensor readings, environmental factors or modeling errors. We present Requiem, a software-only, blackbox approach that exploits this space in a stealthy manner, causing target systems, e.g., unmanned aerial vehicles (UAVs), to significantly deviate from their mission parameters. Our system achieves this by modifying sensor values, all while avoiding detection by onboard anomaly detectors (hence, "stealthy"). The Requiem framework uses a combination of multiple deep learning models (which we refer to as "surrogates" and "spoofers") coupled with extensive, realistic simulations on a software-in-the-loop quadrotor UAV system. Requiem makes no assumptions about either the (types of) sensors or the onboard state estimation algorithm(s) -- it works so long as the latter is "learnable". We demonstrate the effectiveness of our system using various attacks across multiple missions as well as multiple sets of statistical analyses. We show that Requiem successfully exploits the modeling errors (i.e., causes significant deviations from planned mission parameters) while remaining stealthy (no detection even after tens of meters of deviation) and is generalizable (Requiem has the potential to work across different attacks and sensor types).
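The abstract describes a two-model pipeline: a "surrogate" that learns to imitate the blackbox onboard state estimator, and a "spoofer" that crafts sensor offsets which push the estimated state while staying under the anomaly detector's threshold. Below is a minimal PyTorch sketch of that division of labor; all class names, dimensions, the stealth budget, and the loss shape are illustrative assumptions, not the paper's published interface.

```python
# Hedged sketch of the surrogate/spoofer idea described in the abstract.
# All names, dimensions, and the detection threshold are illustrative
# assumptions, not the paper's actual interface.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Learns to imitate the (blackbox) onboard state estimator."""
    def __init__(self, n_sensors=9, n_state=12, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_state)

    def forward(self, sensor_seq):            # (batch, time, n_sensors)
        out, _ = self.rnn(sensor_seq)
        return self.head(out[:, -1])          # predicted estimator state

class Spoofer(nn.Module):
    """Proposes additive sensor offsets intended to stay under the
    anomaly detector's threshold while pushing the estimated state."""
    def __init__(self, n_sensors=9, hidden=64, max_offset=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, n_sensors), nn.Tanh())
        self.max_offset = max_offset          # assumed stealth budget

    def forward(self, sensors):
        return self.max_offset * self.net(sensors)

# Training idea (loss only): reward deviation of the surrogate's state
# estimate while penalizing offsets large enough to trip the detector.
def spoofer_loss(state_dev, offset, detector_margin, alpha=10.0):
    stealth_penalty = torch.relu(offset.abs() - detector_margin).sum()
    return -state_dev.norm() + alpha * stealth_penalty
```

The loss trades off mission deviation against a penalty that activates only when an offset exceeds the assumed detector margin, which is the informal sense in which the attack stays "stealthy".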
Related papers
- Sensor Deprivation Attacks for Stealthy UAV Manipulation [51.9034385791934]
Unmanned Aerial Vehicles autonomously perform tasks with the use of state-of-the-art control algorithms.
In this work, we propose multi-part Sensor Deprivation Attacks (SDAs), aiming to stealthily impact process control via sensor reconfiguration.
arXiv Detail & Related papers (2024-10-14T23:03:58Z)
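The summary above suggests the attack reconfigures sensors so the control loop keeps consuming measurements that are no longer fresh. A toy, self-contained illustration of that effect; the hold-last-value behavior and all names are assumptions for illustration only.

```python
# Toy illustration of a sensor-deprivation effect: the controller keeps
# consuming the last value after the sensor is "reconfigured" (frozen).
# Names and the hold-last-value behavior are assumptions, not the paper's.
class DeprivableSensor:
    def __init__(self):
        self._last = 0.0
        self.deprived = False

    def read(self, true_value):
        if not self.deprived:
            self._last = true_value   # normal operation: pass through
        return self._last             # deprived: stale value, looks valid

sensor = DeprivableSensor()
altitude = 10.0
for t in range(6):
    if t == 3:
        sensor.deprived = True        # attack starts: readings freeze
    altitude += 0.5                   # vehicle keeps climbing
    print(t, sensor.read(altitude))   # controller sees 11.5 from t=3 on
```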
- Detection and tracking of MAVs using a LiDAR with rosette scanning pattern [2.062195473318468]
This work presents a method for the detection and tracking of MAVs using a novel, low-cost rosette scanning LiDAR on a pan-tilt turret.
The tracking makes it possible to keep the MAV in the center, maximizing the density of 3D points measured on the target by the LiDAR sensor.
arXiv Detail & Related papers (2024-08-16T06:40:20Z)
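The tracking described above amounts to steering the pan-tilt turret so the MAV stays centered in the LiDAR's field of view. A minimal proportional-control sketch of such a loop; the gain and the error convention are assumptions, not the paper's controller.

```python
# Minimal proportional pan-tilt centering loop, in the spirit of keeping
# the MAV centered to maximize point density on the target. Gains and
# the angle convention are illustrative assumptions.
def pan_tilt_step(target_az, target_el, pan, tilt, kp=0.4):
    """One control step: move a fraction of the angular error."""
    pan += kp * (target_az - pan)     # azimuth error -> pan command
    tilt += kp * (target_el - tilt)   # elevation error -> tilt command
    return pan, tilt

pan, tilt = 0.0, 0.0
for _ in range(20):                   # converges toward (30, 10) degrees
    pan, tilt = pan_tilt_step(30.0, 10.0, pan, tilt)
print(round(pan, 2), round(tilt, 2))
```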
- Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery [64.41455104593304]
Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences.
We propose to adapt similar RL-based methods to unsupervised object discovery.
We demonstrate that our approach is not only more accurate, but also orders of magnitude faster to train.
arXiv Detail & Related papers (2023-10-29T17:03:12Z)
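The summary adapts RLHF-style reward finetuning to object discovery. A schematic REINFORCE-style reward-weighted update is sketched below; the detector interface, the sampling scheme, and the reward function are placeholders rather than the paper's method.

```python
# Schematic reward-weighted finetuning step (REINFORCE estimator). The
# detector, sampling, and reward function are placeholders, not the
# paper's actual pipeline.
import torch

def reward_finetune_step(detector, optimizer, points, reward_fn):
    scores = detector(points)                     # per-proposal logits
    probs = torch.sigmoid(scores)
    picks = torch.bernoulli(probs)                # sample accept/reject
    reward = reward_fn(picks)                     # scalar quality signal
    logp = (picks * probs.clamp_min(1e-6).log()
            + (1 - picks) * (1 - probs).clamp_min(1e-6).log()).sum()
    loss = -reward * logp                         # maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```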
- Explaining RADAR features for detecting spoofing attacks in Connected Autonomous Vehicles [2.8153045998456188]
Connected autonomous vehicles (CAVs) are anticipated to have built-in AI systems for defending against cyberattacks.
Machine learning (ML) models form the basis of many such AI systems.
We present a model that explains "certainty" and "uncertainty" in sensor input.
arXiv Detail & Related papers (2022-03-01T00:11:46Z)
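One common way to surface a model's certainty and uncertainty on sensor input is Monte-Carlo dropout; the sketch below uses it as a generic stand-in, not as the paper's specific explainability technique.

```python
# Generic certainty/uncertainty probe via Monte-Carlo dropout. This is a
# common stand-in technique, not necessarily the paper's explainer.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Dropout(p=0.2), nn.Linear(32, 2))

def mc_uncertainty(model, x, n_samples=50):
    model.train()                       # keep dropout active at inference
    with torch.no_grad():
        preds = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)  # mean = certainty, std = uncertainty

mean, std = mc_uncertainty(model, torch.randn(1, 16))
```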
- Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems [10.310327880799017]
Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs).
In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors.
Experiments show that the physical AEs generated from our pipeline are effective and robust when attacking the YOLO v5 based Traffic Sign Recognition system.
arXiv Detail & Related papers (2022-01-17T03:24:31Z)
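Robust physical adversarial examples are typically optimized over random transformations (Expectation-over-Transformation) with projected gradient steps. A compact sketch of that recipe follows; the victim model, the noise stand-in for real-world transforms, and all budgets are assumptions, and the paper's full pipeline is more involved.

```python
# Compact Expectation-over-Transformation + PGD sketch for robust
# physical adversarial examples. The victim model and the transform
# stand-in are placeholders for illustration.
import torch

def eot_pgd(model, img, label, eps=8/255, step=1/255, iters=40, k=8):
    delta = torch.zeros_like(img, requires_grad=True)
    for _ in range(iters):
        loss = 0.0
        for _ in range(k):                      # average over transforms
            t = img + delta + 0.03 * torch.randn_like(img)  # jitter stand-in
            loss = loss + torch.nn.functional.cross_entropy(
                model(t.clamp(0, 1)), label)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend the victim's loss
            delta.clamp_(-eps, eps)             # stay in the L-inf ball
        delta.grad.zero_()
    return (img + delta).clamp(0, 1).detach()
```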
- Small Object Detection using Deep Learning [0.28675177318965034]
The proposed system is built around Tiny YOLOv3, a lightweight variant of the fast object detection model You Only Look Once (YOLO), which is used for detection.
The proposed architecture has shown significantly better performance compared to the previous YOLO version.
arXiv Detail & Related papers (2022-01-10T09:58:25Z)
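A common way to run a Tiny YOLOv3 Darknet model is OpenCV's DNN module; a minimal inference sketch follows, where the config/weights file names are the standard Darknet release names and are assumed to be available locally.

```python
# Running a Tiny YOLOv3 Darknet model with OpenCV's DNN module. The file
# names are the standard Darknet release names, assumed to be on disk.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
img = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
# Each output row: [cx, cy, w, h, objectness, class scores...]
for out in outputs:
    for det in out:
        if det[4] > 0.5:              # objectness threshold
            print("detection:", det[:5])
```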
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It builds on a regular LSTM-based auto-encoder but uses several decoders, each receiving data from a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
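The DAE description above, one shared encoder with a decoder per flight phase, maps naturally onto a small PyTorch module. A minimal sketch; dimensions, the number of phases, and the routing by an explicit phase index are illustrative assumptions.

```python
# Minimal sketch of the DAE idea: one shared LSTM encoder, one decoder
# per flight phase, reconstruction error as anomaly score. Dimensions
# and the phase-routing scheme are illustrative assumptions.
import torch
import torch.nn as nn

class PhaseDAE(nn.Module):
    def __init__(self, n_feat=8, hidden=32, n_phases=3):
        super().__init__()
        self.encoder = nn.LSTM(n_feat, hidden, batch_first=True)
        self.decoders = nn.ModuleList(
            nn.LSTM(hidden, n_feat, batch_first=True) for _ in range(n_phases))

    def forward(self, x, phase):               # x: (batch, time, n_feat)
        h, _ = self.encoder(x)
        recon, _ = self.decoders[phase](h)     # phase-specific decoder
        return recon

model = PhaseDAE()
x = torch.randn(4, 50, 8)
recon = model(x, phase=1)                      # e.g. 1 = "cruise"
anomaly_score = (recon - x).pow(2).mean(dim=(1, 2))  # per-sequence error
```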
- A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View [93.23947591795897]
In this paper, we strive to tackle the challenges and automatically understand the crowd from the visual data collected from drones.
To alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed.
To tackle the crowd density estimation problem in extremely dark environments, we introduce synthetic data generated by the game Grand Theft Auto V (GTAV).
arXiv Detail & Related papers (2020-09-29T01:48:24Z)
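A toy rendering of the "double-stream" idea above: two parallel feature branches fused into one density map whose integral is the crowd count. The branch inputs (frame and flow) and all layer sizes are guesses for illustration, not the paper's architecture.

```python
# Toy two-branch ("double-stream") density-map regressor in the spirit
# of the summary; the real architecture and its fusion are assumptions.
import torch
import torch.nn as nn

class TwoStreamCounter(nn.Module):
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.appearance = branch()          # raw frame stream
        self.motion = branch()              # e.g. optical-flow stream
        self.head = nn.Conv2d(32, 1, 1)     # fused density map

    def forward(self, frame, flow):
        feats = torch.cat([self.appearance(frame), self.motion(flow)], dim=1)
        return self.head(feats)             # predicted density map

model = TwoStreamCounter()
density = model(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
count = density.sum().item()                # crowd count = density integral
```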
- On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems [62.997667081978825]
We present a formal framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems.
The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification.
The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself.
arXiv Detail & Related papers (2020-04-09T10:56:53Z)
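The two attack classes in the summary differ only in what gets perturbed: the input (adversarial example) versus the AI system itself (stealth attack). A side-by-side toy contrast; the model and perturbation magnitudes are illustrative only.

```python
# Side-by-side toy contrast of the two attack classes: adversarial
# examples perturb the input; stealth attacks perturb the model itself.
# Magnitudes and the model are illustrative only.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(1, 4)

# Class 1: adversarial example -- small perturbation of the input.
x_adv = x + 0.05 * torch.randn_like(x)       # model untouched
y_adv = model(x_adv)

# Class 2: stealth attack -- small perturbation of the system's weights.
with torch.no_grad():
    model.weight += 0.05 * torch.randn_like(model.weight)  # input untouched
y_stealth = model(x)
```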
- On-board Deep-learning-based Unmanned Aerial Vehicle Fault Cause Detection and Identification [6.585891825257162]
We propose novel architectures to detect and classify drone mis-operations based on sensor data.
We validate the proposed deep-learning architectures via simulations and experiments on a real drone.
Our solution is able to detect and classify various types of drone mis-operations with over 90% accuracy.
arXiv Detail & Related papers (2020-04-03T22:46:34Z)
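Classifying mis-operations from windows of sensor data is commonly done with a small 1D convolutional network; the sketch below is such a generic stand-in, with channel counts, window length, and the fault-class set all assumed.

```python
# Minimal 1D-CNN over windows of sensor channels for mis-operation
# classification, in the spirit of the summary. Layer sizes and the
# number of fault classes are illustrative assumptions.
import torch
import torch.nn as nn

n_channels, window, n_classes = 6, 100, 5    # e.g. IMU axes, 1 s window

model = nn.Sequential(
    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes))                # fault-cause logits

logits = model(torch.randn(8, n_channels, window))
pred = logits.argmax(dim=1)                  # predicted mis-operation class
```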
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
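Universal 3D adversarial objects of this kind are typically found by optimizing object geometry to suppress the detector's confidence on the hidden vehicle. A schematic gradient loop follows; the detector, the differentiable render-to-point-cloud step, and all sizes are placeholders, not the paper's pipeline.

```python
# Schematic optimization of a 3D object's vertices to suppress a LiDAR
# detector's score, in the spirit of the summary. The detector, the
# differentiable "render to point cloud" step, and all sizes are
# placeholders for illustration.
import torch

def optimize_adv_object(detector, render, scene_pts, steps=200, lr=1e-2):
    verts = torch.randn(200, 3, requires_grad=True)   # rooftop object mesh
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        pts = torch.cat([scene_pts, render(verts)])   # object + scene cloud
        score = detector(pts)                         # victim's confidence
        loss = score.mean()                           # minimize detection
        opt.zero_grad()
        loss.backward()
        opt.step()
    return verts.detach()
```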