NOMAD: A Natural, Occluded, Multi-scale Aerial Dataset, for Emergency Response Scenarios
- URL: http://arxiv.org/abs/2309.09518v2
- Date: Sat, 07 Dec 2024 05:27:49 GMT
- Title: NOMAD: A Natural, Occluded, Multi-scale Aerial Dataset, for Emergency Response Scenarios
- Authors: Arturo Miguel Russell Bernal, Walter Scheirer, Jane Cleland-Huang
- Abstract summary: Natural, Occluded, Multi-scale Aerial dataset (NOMAD) is a benchmark dataset for human detection under occluded aerial views.
NOMAD is composed of 100 different actors, all performing sequences of walking, lying down, and hiding.
It includes 42,825 frames, extracted from 5.4K-resolution videos and manually annotated with a bounding box and a label describing 10 different visibility levels.
- Score: 41.03292974500013
- Abstract: With the increasing reliance on small Unmanned Aerial Systems (sUAS) for Emergency Response Scenarios, such as Search and Rescue, the integration of computer vision capabilities has become a key factor in mission success. Nevertheless, computer vision performance for detecting humans severely degrades when shifting from ground to aerial views. Several aerial datasets have been created to mitigate this problem; however, none of them specifically addresses the issue of occlusion, a critical component in Emergency Response Scenarios. The Natural, Occluded, Multi-scale Aerial Dataset (NOMAD) presents a benchmark for human detection under occluded aerial views, with five different aerial distances and rich imagery variance. NOMAD is composed of 100 different actors, all performing sequences of walking, lying down, and hiding. It includes 42,825 frames, extracted from 5.4K-resolution videos and manually annotated with a bounding box and a label describing 10 different visibility levels, categorized according to the percentage of the human body visible inside the bounding box. This allows computer vision models to be evaluated on their detection performance across different ranges of occlusion. NOMAD is designed to improve the effectiveness of aerial search and rescue and to enhance collaboration between sUAS and humans, by providing a new benchmark dataset for human detection under occluded aerial views. The full dataset can be found at: https://github.com/ArtRuss/NOMAD.
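The per-box visibility labels make it possible to score a detector separately on each occlusion range. As a minimal illustration (not the authors' evaluation code), the sketch below bins ground-truth boxes by their visibility level and computes recall per bin; the field names "bbox" and "visibility" and the 0.5 IoU threshold are assumptions, not NOMAD's official schema.

```python
# Minimal sketch: per-visibility-level recall from NOMAD-style annotations.
# Field names and the IoU threshold are illustrative assumptions.
from collections import defaultdict

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall_by_visibility(ground_truth, detections, iou_thr=0.5):
    """ground_truth: [{"bbox": [...], "visibility": 1..10}]; detections: list of boxes."""
    hits, totals = defaultdict(int), defaultdict(int)
    for gt in ground_truth:
        totals[gt["visibility"]] += 1
        if any(iou(gt["bbox"], det) >= iou_thr for det in detections):
            hits[gt["visibility"]] += 1
    return {v: hits[v] / totals[v] for v in sorted(totals)}

# Example: a fully visible person (level 10) is found, a heavily occluded one (level 2) is missed.
gts = [{"bbox": [10, 10, 50, 90], "visibility": 10},
       {"bbox": [200, 40, 230, 80], "visibility": 2}]
dets = [[12, 8, 52, 88]]
print(recall_by_visibility(gts, dets))  # {2: 0.0, 10: 1.0}
```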
Related papers
- Psych-Occlusion: Using Visual Psychophysics for Aerial Detection of Occluded Persons during Search and Rescue [41.03292974500013]
Small Unmanned Aerial Systems (sUAS) serve as "eyes in the sky" during Emergency Response (ER) scenarios.
Efficient detection of persons from aerial views plays a crucial role in achieving a successful mission outcome.
Performance of Computer Vision (CV) models onboard sUAS substantially degrades under real-life rigorous conditions.
We exemplify the use of our behavioral dataset, Psych-ER, by using its human accuracy data to adapt the loss function of a detection model.
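The summary does not spell out how the loss is adapted; as a hedged sketch of the general idea, behavioral accuracy from a dataset like Psych-ER could be used to up-weight training examples that humans also find hard. The weighting scheme and the human_accuracy input below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def human_guided_bce(logits, targets, human_accuracy, alpha=1.0):
    """Per-example BCE loss, up-weighted where humans also struggle.

    human_accuracy: tensor in [0, 1], psychophysics-derived detection accuracy
    for each example's occlusion/distance condition (hypothetical input).
    """
    per_example = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    weights = 1.0 + alpha * (1.0 - human_accuracy)  # harder for humans -> larger weight
    return (weights * per_example).mean()

# Toy usage: two examples, the second from a condition where humans score only 40%.
logits = torch.tensor([2.0, -0.5])
targets = torch.tensor([1.0, 1.0])
acc = torch.tensor([0.95, 0.40])
print(human_guided_bce(logits, targets, acc))
```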
arXiv Detail & Related papers (2024-12-07T06:22:42Z)
- MMAUD: A Comprehensive Multi-Modal Anti-UAV Dataset for Modern Miniature Drone Threats [37.981623262267036]
MMAUD addresses a critical gap in contemporary threat detection methodologies by focusing on drone detection, UAV-type classification, and trajectory estimation.
It offers a unique overhead aerial detection perspective, vital for addressing real-world scenarios with higher fidelity than datasets captured from fixed vantage points, using thermal and RGB modalities.
Our proposed modalities are cost-effective and highly adaptable, allowing users to experiment and implement new UAV threat detection tools.
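Among MMAUD's tasks is trajectory estimation. As a generic illustration of that task (not the dataset's baseline), a constant-velocity Kalman filter over 3D position measurements looks like this; the time step and noise covariances are tuning assumptions.

```python
import numpy as np

# Generic constant-velocity Kalman filter for 3D drone tracking; a plain
# illustration of the trajectory-estimation task, not MMAUD's baseline.
dt = 0.1                                       # time step (s), assumed sensor rate
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)      # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe position only
Q = 1e-2 * np.eye(6)                           # process noise (tuning assumption)
R = 1e-1 * np.eye(3)                           # measurement noise (tuning assumption)

x, P = np.zeros(6), np.eye(6)
for z in [np.array([0.0, 0.0, 10.0]), np.array([0.1, 0.0, 10.1])]:
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with measurement z
    P = (np.eye(6) - K @ H) @ P
print("estimated state:", np.round(x, 3))
```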
arXiv Detail & Related papers (2024-02-06T04:57:07Z)
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- The State of Aerial Surveillance: A Survey [62.198765910573556]
This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective.
The main objects of interest are humans, where single or multiple subjects are to be detected, identified, tracked, re-identified, and have their behavior analyzed.
arXiv Detail & Related papers (2022-01-09T20:13:27Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- Small or Far Away? Exploiting Deep Super-Resolution and Altitude Data for Aerial Animal Surveillance [3.8015092217142223]
We show that a holistic attention network based super-resolution approach and a custom-built altitude data exploitation network can increase the detection efficacy in real-world settings.
We evaluate the system on two public, large aerial-capture animal datasets, SAVMAP and AED.
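Why altitude data helps is easy to see from the pinhole camera model: at nadir, an object's on-image extent is roughly focal_px * size_m / altitude_m, so apparent size halves each time altitude doubles. The sketch below works through this with illustrative numbers; it is not code from the paper.

```python
# Back-of-the-envelope pinhole model for why altitude matters: the expected
# on-image size of an animal shrinks inversely with flight altitude, so a
# detector (or super-resolution stage) can be conditioned on it. All numbers
# are illustrative assumptions, not values from the paper.
def expected_pixel_size(object_m, altitude_m, focal_px):
    """Approximate on-image extent (pixels) of an object seen at nadir."""
    return focal_px * object_m / altitude_m

for alt in (30, 60, 120):
    px = expected_pixel_size(object_m=2.0, altitude_m=alt, focal_px=2800)
    print(f"altitude {alt:>4} m -> ~{px:5.1f} px")  # size halves per doubled altitude
```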
arXiv Detail & Related papers (2021-11-12T17:30:55Z)
- A Multi-viewpoint Outdoor Dataset for Human Action Recognition [3.522154868524807]
We present a multi-viewpoint outdoor action recognition dataset collected from YouTube and our own drone.
The dataset consists of 20 dynamic human action classes, 2,324 video clips, and 503,086 frames.
The overall baseline action recognition accuracy is 74.0%.
arXiv Detail & Related papers (2021-10-07T14:50:43Z)
- UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles [12.210724541266183]
We propose a new benchmark - UAV-Human - for human behavior understanding with UAVs.
Our dataset contains 67,428 multi-modal video sequences and 119 subjects for action recognition.
We propose a fisheye-based action recognition method that mitigates the distortions in fisheye videos via learning transformations guided by flat RGB videos.
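For comparison with the paper's learned approach, a classical calibration-based baseline for the same fisheye distortion problem uses OpenCV's fisheye model. This is a plainly different technique than the paper's (which learns transformations guided by flat RGB videos); the camera matrix K and coefficients D below are placeholders that would normally come from cv2.fisheye.calibrate.

```python
import cv2
import numpy as np

# Classical baseline for fisheye distortion, NOT the paper's learned method:
# calibration-based unwarping with OpenCV's fisheye model. K and D below are
# placeholder values; real ones come from calibrating on a checkerboard.
K = np.array([[600.0, 0.0, 640.0],
              [0.0, 600.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])  # k1..k4 fisheye coefficients

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a video frame
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
print(undistorted.shape)  # (720, 1280, 3)
```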
arXiv Detail & Related papers (2021-04-02T08:54:04Z)
- AdaFuse: Adaptive Multiview Fusion for Accurate Human Pose Estimation in the Wild [77.43884383743872]
We present AdaFuse, an adaptive multiview fusion method to enhance the features in occluded views.
We extensively evaluate the approach on three public datasets including Human3.6M, Total Capture and CMU Panoptic.
We also create a large scale synthetic dataset Occlusion-Person, which allows us to perform numerical evaluation on the occluded joints.
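A minimal sketch of the fusion idea (not AdaFuse's actual epipolar-geometry implementation): per-view joint heatmaps are combined with adaptive per-view weights so that occluded views contribute less. The tensor shapes and softmax weighting below are assumptions for illustration.

```python
import torch

# Sketch of the *idea* of adaptive multiview fusion only: per-view joint
# heatmaps are blended with adaptive quality weights, down-weighting views
# where the subject is occluded.
def fuse_heatmaps(heatmaps, view_scores):
    """heatmaps: (V, J, H, W) per-view joint heatmaps; view_scores: (V,) logits."""
    w = torch.softmax(view_scores, dim=0)           # adaptive view weights
    return (w.view(-1, 1, 1, 1) * heatmaps).sum(0)  # fused (J, H, W) heatmap

views = torch.rand(4, 17, 64, 64)               # 4 cameras, 17 joints
scores = torch.tensor([2.0, 1.5, -1.0, 0.5])    # low score ~ occluded view
fused = fuse_heatmaps(views, scores)
print(fused.shape)  # torch.Size([17, 64, 64])
```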
arXiv Detail & Related papers (2020-10-26T03:19:46Z)
- A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View [93.23947591795897]
In this paper, we strive to tackle the challenges and automatically understand the crowd from the visual data collected from drones.
To alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed.
To tackle the crowd density estimation problem in extremely dark environments, we introduce synthetic data generated with the game Grand Theft Auto V (GTAV).
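A minimal sketch of what a double-stream counting model can look like, under the assumption of one appearance stream and one motion (optical-flow) stream fused before a density head; the layer sizes and fusion scheme are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative two-stream density-map regressor; sizes and fusion are
# assumptions, not the paper's bi-path network.
class TwoStreamCounter(nn.Module):
    def __init__(self):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.appearance = stream(3)      # raw RGB frame stream
        self.motion = stream(2)          # optical-flow stream (assumed 2-channel)
        self.head = nn.Conv2d(32, 1, 1)  # regresses a 1-channel density map

    def forward(self, frame, flow):
        feats = torch.cat([self.appearance(frame), self.motion(flow)], dim=1)
        density = torch.relu(self.head(feats))
        return density, density.sum(dim=(1, 2, 3))  # map and per-image count

model = TwoStreamCounter()
dmap, count = model(torch.rand(1, 3, 128, 128), torch.rand(1, 2, 128, 128))
print(dmap.shape, count.shape)  # torch.Size([1, 1, 128, 128]) torch.Size([1])
```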
arXiv Detail & Related papers (2020-09-29T01:48:24Z)