On-board Deep-learning-based Unmanned Aerial Vehicle Fault Cause
Detection and Identification
- URL: http://arxiv.org/abs/2005.00336v2
- Date: Wed, 6 May 2020 18:55:28 GMT
- Title: On-board Deep-learning-based Unmanned Aerial Vehicle Fault Cause
Detection and Identification
- Authors: Vidyasagar Sadhu, Saman Zonouz, Dario Pompili
- Abstract summary: We propose novel architectures to detect and classify drone mis-operations based on sensor data.
We validate the proposed deep-learning architectures via simulations and experiments on a real drone.
Our solution detects drone mis-operations with over 90% accuracy and classifies the various types of mis-operations.
- Score: 6.585891825257162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increase in use of Unmanned Aerial Vehicles (UAVs)/drones, it is
important to detect and identify causes of failure in real time for proper
recovery from a potential crash-like scenario or post incident forensics
analysis. The cause of crash could be either a fault in the sensor/actuator
system, a physical damage/attack, or a cyber attack on the drone's software. In
this paper, we propose novel architectures based on deep Convolutional and Long
Short-Term Memory Neural Networks (CNNs and LSTMs) to detect (via Autoencoder)
and classify drone mis-operations based on sensor data. The proposed
architectures are able to learn high-level features automatically from the raw
sensor data and learn the spatial and temporal dynamics in the sensor data. We
validate the proposed deep-learning architectures via simulations and
experiments on a real drone. Empirical results show that our solution detects drone
mis-operations with over 90% accuracy and classifies the various types of
mis-operations with about 99% accuracy on simulation data and up to 88% accuracy
on experimental data.
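As a rough illustration of the detection side described in the abstract, the sketch below implements a small LSTM autoencoder in PyTorch that flags a window of drone sensor data as anomalous when its reconstruction error exceeds a threshold. The layer sizes, window length, sensor channel count, and threshold are illustrative assumptions, not the authors' configuration; the paper additionally classifies the detected mis-operation type with CNN/LSTM networks, which is not shown here.

```python
# Minimal sketch (not the authors' code): an LSTM autoencoder trained on fault-free
# flight data; a sensor window whose reconstruction error exceeds a threshold is
# flagged as a potential fault or mis-operation.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        # Repeat the latent state across time and decode it back to sensor space.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.output(dec)

model = LSTMAutoencoder(n_features=9)          # e.g. accelerometer + gyro + magnetometer channels (assumed)
window = torch.randn(1, 100, 9)                # one 100-step sensor window (dummy data)
recon = model(window)
error = torch.mean((recon - window) ** 2).item()
is_fault = error > 0.5                         # threshold would be fit on fault-free flights
print(f"reconstruction error={error:.3f}, fault flagged={is_fault}")
```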
Related papers
- Drone Detection using Deep Neural Networks Trained on Pure Synthetic Data [0.4369058206183195]
We present a drone detection Faster-RCNN model trained on a purely synthetic dataset that transfers to real-world data.
Our results show that using synthetic data for drone detection has the potential to reduce data collection costs and improve labelling quality.
arXiv Detail & Related papers (2024-11-13T23:09:53Z)
- Learning 3D Perception from Others' Predictions [64.09115694891679]
We investigate a new scenario to construct 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector.
For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area.
arXiv Detail & Related papers (2024-10-03T16:31:28Z)
- DroBoost: An Intelligent Score and Model Boosting Method for Drone Detection [1.2564343689544843]
Drone detection is a challenging object detection task where visibility conditions and quality of the images may be unfavorable.
Our work improves on the previous approach by combining several improvements.
The proposed technique won 1st Place in the Drone vs. Bird Challenge.
arXiv Detail & Related papers (2024-06-30T20:49:56Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Track Boosting and Synthetic Data Aided Drone Detection [0.0]
Our method approaches the drone detection problem by fine-tuning a YOLOv5 model with real and synthetically generated data.
Our results indicate that augmenting the real data with an optimal subset of synthetic data can increase the performance.
arXiv Detail & Related papers (2021-11-24T10:16:27Z)
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE)
It builds on a regular LSTM-based auto-encoder but uses several decoders, each receiving data of a specific flight phase; a minimal sketch of this idea follows below.
Results show that the DAE achieves better results in both accuracy and speed of detection.
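The sketch below is a rough, assumed interpretation of the multi-decoder idea in the DAE entry above (not the authors' code): a shared LSTM encoder with one decoder per flight phase, where the phase names, layer sizes, and window shape are illustrative.

```python
# Illustrative sketch of a phase-conditioned auto-encoder: shared encoder,
# one decoder and output head per flight phase; reconstruction error serves
# as the anomaly score. All names and sizes are assumptions.
import torch
import torch.nn as nn

class PhaseDAE(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 32,
                 phases=("climb", "cruise", "descent")):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoders = nn.ModuleDict(
            {p: nn.LSTM(hidden_size, hidden_size, batch_first=True) for p in phases})
        self.heads = nn.ModuleDict(
            {p: nn.Linear(hidden_size, n_features) for p in phases})

    def forward(self, x, phase: str):          # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoders[phase](z)
        return self.heads[phase](dec)

model = PhaseDAE(n_features=6)
window = torch.randn(4, 50, 6)                 # four 50-step windows of 6 sensor channels (dummy data)
recon = model(window, phase="cruise")
print(torch.mean((recon - window) ** 2))       # reconstruction error used as anomaly score
```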
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Scarce Data Driven Deep Learning of Drones via Generalized Data Distribution Space [12.377024173799631]
We show how understanding the general distribution of the drone data via a Generative Adversarial Network (GAN) can allow us to acquire missing data to achieve rapid and more accurate learning.
We demonstrate our results on a drone image dataset, which contains both real drone images as well as simulated images from computer-aided design.
arXiv Detail & Related papers (2021-08-18T17:07:32Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- Contextual-Bandit Anomaly Detection for IoT Data in Distributed Hierarchical Edge Computing [65.78881372074983]
IoT devices can hardly afford complex deep neural network (DNN) models, and offloading anomaly detection tasks to the cloud incurs long delay.
We propose and build a demo for an adaptive anomaly detection approach for distributed hierarchical edge computing (HEC) systems.
We show that our proposed approach significantly reduces detection delay without sacrificing accuracy, as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-04-15T06:13:33Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)