CRASAR-U-DROIDs: A Large Scale Benchmark Dataset for Building Alignment and Damage Assessment in Georectified sUAS Imagery
- URL: http://arxiv.org/abs/2407.17673v2
- Date: Mon, 29 Jul 2024 18:12:21 GMT
- Title: CRASAR-U-DROIDs: A Large Scale Benchmark Dataset for Building Alignment and Damage Assessment in Georectified sUAS Imagery
- Authors: Thomas Manzini, Priyankari Perali, Raisa Karnik, Robin Murphy
- Abstract summary: CRASAR-U-DROIDs is the largest labeled dataset of sUAS orthomosaic imagery.
The CRASAR-U-DROIDs dataset consists of fifty-two (52) orthomosaics from ten (10) federally declared disasters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This document presents the Center for Robot Assisted Search And Rescue - Uncrewed Aerial Systems - Disaster Response Overhead Inspection Dataset (CRASAR-U-DROIDs) for building damage assessment and spatial alignment collected from small uncrewed aerial systems (sUAS) geospatial imagery. This dataset is motivated by the increasing use of sUAS in disaster response, the lack of previous work utilizing high-resolution geospatial sUAS imagery for machine learning and computer vision models, the lack of alignment with operational use cases, and the hope of enabling further investigations between sUAS and satellite imagery. The CRASAR-U-DROIDs dataset consists of fifty-two (52) orthomosaics from ten (10) federally declared disasters (Hurricane Ian, Hurricane Ida, Hurricane Harvey, Hurricane Idalia, Hurricane Laura, Hurricane Michael, Musset Bayou Fire, Mayfield Tornado, Kilauea Eruption, and Champlain Towers Collapse) spanning 67.98 square kilometers (26.245 square miles), containing 21,716 building polygons and damage labels, and 7,880 adjustment annotations. The imagery was tiled and presented in conjunction with overlaid building polygons to a pool of 130 annotators who provided human judgments of damage according to the Joint Damage Scale. These annotations were then reviewed via a two-stage review process in which building polygon damage labels were first reviewed individually and then again by committee. Additionally, the building polygons have been aligned spatially to precisely overlap with the imagery to enable more performant machine learning models to be trained. It appears that CRASAR-U-DROIDs is the largest labeled dataset of sUAS orthomosaic imagery.
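The adjustment annotations described above amount to translational alignments: shifting an a priori building polygon so it precisely overlaps the orthomosaic. A minimal sketch of applying such a shift and deriving the distance/angle metrics used in the related alignment-error study below (plain Python; the function names and coordinate values are hypothetical, not part of the dataset's tooling):

```python
import math

def translate_polygon(coords, dx, dy):
    """Shift every vertex of a building polygon by a fixed (dx, dy) offset.

    This models a purely translational alignment adjustment: the polygon's
    shape is unchanged, only its position relative to the imagery moves.
    """
    return [(x + dx, y + dy) for x, y in coords]

def offset_metrics(dx, dy):
    """Distance and angle (degrees, measured counterclockwise from east)
    of a translational offset vector."""
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx)) % 360

# Hypothetical example footprint in local map units.
footprint = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
aligned = translate_polygon(footprint, 3.0, -4.0)
dist, angle = offset_metrics(3.0, -4.0)
print(aligned[0], round(dist, 2), round(angle, 1))  # (3.0, -4.0) 5.0 306.9
```

In practice such adjustments would be applied to georeferenced polygon geometries (e.g. via a GIS library) rather than raw coordinate lists, but the arithmetic is the same.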
Related papers
- Non-Uniform Spatial Alignment Errors in sUAS Imagery From Wide-Area Disasters
This work presents the first quantitative study of alignment errors between small uncrewed aerial systems (sUAS) geospatial imagery and a priori building polygons.
There are no efforts that have aligned pre-existing spatial data with sUAS imagery, and thus, there is no clear state of practice.
This study identifies and analyzes the translational alignment errors of 21,619 building polygons in fifty-one orthomosaic images, covering 16,787.2 acres (26.23 square miles).
The analysis finds no uniformity among the angle and distance metrics of the building polygon alignments.
arXiv Detail & Related papers (2024-05-10T16:48:44Z) - Post-hurricane building damage assessment using street-view imagery and structured data: A multi-modal deep learning approach
We propose a novel multi-modal approach for post-hurricane building damage classification, named the Multi-Modal Swin Transformer (MMST)
We empirically train and evaluate the proposed MMST using data collected from the 2022 Hurricane Ian in Florida, USA.
Results show that MMST outperforms all selected state-of-the-art benchmark models and can achieve an accuracy of 92.67%.
arXiv Detail & Related papers (2024-04-11T00:23:28Z) - Causality-informed Rapid Post-hurricane Building Damage Detection in Large Scale from InSAR Imagery
Timely and accurate assessment of hurricane-induced building damage is crucial for effective post-hurricane response and recovery efforts.
Recently, remote sensing technologies provide large-scale optical or Interferometric Synthetic Aperture Radar (InSAR) imagery data immediately after a disastrous event.
These InSAR images often contain highly noisy and mixed signals induced by co-occurring or co-located building damage, flooding, flood/wind-induced vegetation changes, and anthropogenic activities.
This paper introduces an approach for rapid post-hurricane building damage detection from InSAR imagery.
arXiv Detail & Related papers (2023-10-02T18:56:05Z) - A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z) - Object Detection in Aerial Images: A Large-Scale Benchmark and Challenges
We present a large-scale dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI.
The proposed DOTA dataset contains 1,793,658 object instances of 18 categories of oriented-bounding-box annotations collected from 11,268 aerial images.
We build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy performances of each model have been evaluated.
arXiv Detail & Related papers (2021-02-24T11:20:55Z) - Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features
We propose a mixed data approach, which leverages publicly available satellite imagery and geolocation features of the affected area to identify damaged buildings after a hurricane.
The method demonstrated significant improvement over a similar approach using only imagery features, based on a case study of Hurricane Harvey affecting the Greater Houston area in 2017.
In this work, the geolocation features were chosen to provide extra information beyond the imagery features; users can include other features to model the physical behavior of the event, depending on their domain knowledge and the type of disaster.
arXiv Detail & Related papers (2020-12-15T21:30:19Z) - Assessing out-of-domain generalization for robust building damage detection
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the OOD regime instead.
arXiv Detail & Related papers (2020-11-20T10:30:43Z) - Physics-informed GANs for Coastal Flood Visualization
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z) - MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
arXiv Detail & Related papers (2020-06-30T02:23:05Z) - RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery
RescueNet is a unified model that can simultaneously segment buildings and assess the damage levels to individual buildings and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.