Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum
Learning Approach
- URL: http://arxiv.org/abs/2109.13666v1
- Date: Tue, 28 Sep 2021 12:34:13 GMT
- Title: Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum
Learning Approach
- Authors: Ali Safa, Tim Verbelen, Ilja Ocket, André Bourdoux, Francky
Catthoor, Georges G.E. Gielen
- Abstract summary: We present KUL-UAVSAFE, a first-of-its-kind dataset for the study of safety-critical people detection by drones.
We propose a CNN architecture with cross-fusion highways and introduce a curriculum learning strategy for multi-modal data.
- Score: 1.094245191265935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Drones are currently being explored for safety-critical applications where
human agents are expected to evolve in their vicinity. In such applications,
robust people avoidance must be provided by fusing a number of sensing
modalities in order to avoid collisions. Currently, however, people detection
systems used on drones are based solely on standard cameras, apart from an
emerging number of works discussing the fusion of imaging and event-based
cameras. Radar-based systems, on the other hand, provide the utmost robustness
towards environmental conditions but do not provide complete information on
their own and have mainly been investigated in automotive contexts, not for
drones. In
order to enable the fusion of radars with both event-based and standard
cameras, we present KUL-UAVSAFE, a first-of-its-kind dataset for the study of
safety-critical people detection by drones. In addition, we propose a baseline
CNN architecture with cross-fusion highways and introduce a curriculum learning
strategy for multi-modal data termed SAUL, which greatly enhances the
robustness of the system towards hard RGB failures and provides a significant
gain of 15% in peak F1 score compared to the use of BlackIn, previously
proposed for cross-fusion networks. We demonstrate the real-time performance
and feasibility of the approach by implementing the system in an edge-computing
unit. We release our dataset and additional material in the project home page.
Related papers
- Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors [6.03212980984729]
Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications.
The prevention and detection of UAVs are pivotal to guaranteeing confidentiality and safety.
This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection.
arXiv Detail & Related papers (2024-03-18T15:27:58Z)
- Segmentation of Drone Collision Hazards in Airborne RADAR Point Clouds Using PointNet [0.7067443325368975]
A critical prerequisite for integrating UAVs into shared airspace is equipping them with enhanced situational awareness to ensure safe operations.
Our study leverages radar technology for novel end-to-end semantic segmentation of aerial point clouds to simultaneously identify multiple collision hazards.
To our knowledge, this is the first approach addressing simultaneous identification of multiple collision threats in an aerial setting, achieving a robust 94% accuracy.
arXiv Detail & Related papers (2023-11-06T16:04:58Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- VBSF-TLD: Validation-Based Approach for Soft Computing-Inspired Transfer Learning in Drone Detection [0.0]
This paper presents a transfer-based drone detection scheme, which forms an integral part of a computer vision-based module.
By harnessing the knowledge of pre-trained models from a related domain, transfer learning enables improved results even with limited training data.
Notably, the scheme's effectiveness is highlighted by its IoU-based validation results.
arXiv Detail & Related papers (2023-06-11T22:30:23Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- HuPR: A Benchmark for Human Pose Estimation Using Millimeter Wave Radar [30.51398364813315]
This paper introduces a novel human pose estimation benchmark, Human Pose with Millimeter Wave Radar (HuPR)
This dataset is created using cross-calibrated mmWave radar sensors and a monocular RGB camera for cross-modality training of radar-based human pose estimation.
arXiv Detail & Related papers (2022-10-22T22:28:40Z)
- Cross Vision-RF Gait Re-identification with Low-cost RGB-D Cameras and mmWave Radars [15.662787088335618]
This work studies the problem of cross-modal human re-identification (ReID).
We propose the first-of-its-kind vision-RF system for cross-modal multi-person ReID at the same time.
Our proposed system achieves 92.5% top-1 accuracy and 97.5% top-5 accuracy on a group of 56 volunteers.
arXiv Detail & Related papers (2022-07-16T10:34:25Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
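An aggregation rule weighted by how often each client participates can be sketched in a few lines. The weighting formula below (participation count normalized over the selected clients) is an assumption for illustration; the paper's FedFreq rule may define the weights differently.

```python
def fedfreq_aggregate(client_params, participation_counts):
    """Aggregate client model parameters, weighting each client by its
    participation frequency (a sketch of the FedFreq idea, not the paper's
    exact rule).

    client_params: {client_id: list of parameter values}
    participation_counts: {client_id: rounds this client has participated in}
    """
    total = sum(participation_counts[c] for c in client_params)
    n_params = len(next(iter(client_params.values())))
    aggregated = [0.0] * n_params
    for cid, params in client_params.items():
        weight = participation_counts[cid] / total  # frequency-based weight
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated
```

Compared with plain FedAvg (equal or data-size weights), frequency-based weights damp the influence of clients that are rarely seen, which matters when UAV clients have heterogeneous local data distributions.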
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
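Estimating uncertainty from a single forward pass is the hallmark of evidential methods: the network outputs distribution parameters rather than a point prediction. The sketch below uses the Normal-Inverse-Gamma parameterization from the deep evidential regression literature as a stand-in; the parameter names and formulas are from that literature, not necessarily from this paper's Hybrid Evidential Fusion.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Given the four outputs of an evidential regression head
    (Normal-Inverse-Gamma parameters gamma, nu, alpha, beta), return the
    prediction plus aleatoric and epistemic uncertainty, computed in
    closed form from one forward pass. Requires alpha > 1 and nu > 0."""
    prediction = gamma                       # E[mu]
    aleatoric = beta / (alpha - 1)           # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1))    # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic
```

No sampling or ensembling is needed, which is what makes the single-pass uncertainty estimate cheap enough for real-time navigation.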
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.