Deploying Rapid Damage Assessments from sUAS Imagery for Disaster Response
- URL: http://arxiv.org/abs/2511.03132v1
- Date: Wed, 05 Nov 2025 02:49:15 GMT
- Title: Deploying Rapid Damage Assessments from sUAS Imagery for Disaster Response
- Authors: Thomas Manzini, Priyankari Perali, Robin R. Murphy
- Abstract summary: This paper presents the first AI/ML system for automating building damage assessment in uncrewed aerial systems (sUAS) imagery to be deployed operationally during federally declared disasters (Hurricanes Debby and Helene). In response to major disasters, sUAS teams are dispatched to collect imagery of the affected areas to assess damage. At recent disasters, teams collectively delivered between 47GB and 369GB of imagery per day, representing more imagery than can reasonably be transmitted or interpreted by subject matter experts in the disaster scene. To alleviate this data avalanche encountered in practice, computer vision and machine learning techniques are necessary.
- Score: 3.441021278275805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the first AI/ML system for automating building damage assessment in uncrewed aerial systems (sUAS) imagery to be deployed operationally during federally declared disasters (Hurricanes Debby and Helene). In response to major disasters, sUAS teams are dispatched to collect imagery of the affected areas to assess damage; however, at recent disasters, teams collectively delivered between 47GB and 369GB of imagery per day, representing more imagery than can reasonably be transmitted or interpreted by subject matter experts in the disaster scene, thus delaying response efforts. To alleviate this data avalanche encountered in practice, computer vision and machine learning techniques are necessary. While prior work has been deployed to automatically assess damage in satellite imagery, there is no current state of practice for sUAS-based damage assessment systems, as all known work has been confined to academic settings. This work establishes the state of practice via the development and deployment of models for building damage assessment with sUAS imagery. The model development involved training on the largest known dataset of post-disaster sUAS aerial imagery, containing 21,716 building damage labels, and the operational training of 91 disaster practitioners. The best performing model was deployed during the responses to Hurricanes Debby and Helene, where it assessed a combined 415 buildings in approximately 18 minutes. This work contributes documentation of the actual use of AI/ML for damage assessment during a disaster and lessons learned to the benefit of the AI/ML research and user communities.
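The scale of the bottleneck the abstract describes can be made concrete with a rough calculation. The GB-per-day figures and the 415-buildings-in-18-minutes result come from the abstract; the 10 Mbit/s field uplink speed is an assumption for illustration only, not a figure from the paper.

```python
# Back-of-envelope estimate of why 47-369 GB/day of sUAS imagery
# overwhelms transmission from a disaster scene. The uplink speed
# is a hypothetical value chosen for illustration.
def transfer_hours(volume_gb: float, uplink_mbps: float = 10.0) -> float:
    """Hours needed to transmit volume_gb gigabytes at uplink_mbps Mbit/s."""
    bits = volume_gb * 8e9              # decimal GB -> bits
    seconds = bits / (uplink_mbps * 1e6)
    return seconds / 3600

low = transfer_hours(47)    # low end of one day's imagery: ~10 hours
high = transfer_hours(369)  # high end: ~82 hours, i.e. days of backlog
rate_per_min = 415 / 18     # deployed model: ~23 buildings assessed per minute
print(f"{low:.1f} h to {high:.1f} h; {rate_per_min:.1f} buildings/min")
```

Even at the low end, a single day's imagery would occupy such a link for most of a day, which is consistent with the abstract's claim that transmission and expert interpretation cannot keep pace.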
Related papers
- Structural Damage Detection Using AI Super Resolution and Visual Language Model [0.0]
This study proposes a novel, cost-effective framework that leverages aerial drone footage, an advanced AI-based video super-resolution model, Video Restoration Transformer (VRT), and Gemma3:27b, a 27 billion parameter Visual Language Model (VLM). This integrated system is designed to improve low-resolution disaster footage, identify structural damage, and classify buildings into four damage categories, ranging from no/slight damage to total destruction, along with associated risk levels. The framework achieved a classification accuracy of 84.5%, demonstrating its ability to provide highly accurate results.
arXiv Detail & Related papers (2025-08-23T20:12:06Z) - BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response [50.76124284445902]
Building damage assessment (BDA) is an essential capability in the aftermath of a disaster to reduce human casualties. Recent research focuses on the development of AI models to achieve accurate mapping of unseen disaster events. We present a BDA dataset using veRy-hIGH-resoluTion optical and SAR imagery (BRIGHT) to support AI-based all-weather disaster response.
arXiv Detail & Related papers (2025-01-10T14:57:18Z) - CRASAR-U-DROIDs: A Large Scale Benchmark Dataset for Building Alignment and Damage Assessment in Georectified sUAS Imagery [0.5699788926464749]
CRASAR-U-DROIDs is the largest labeled dataset of sUAS orthomosaic imagery.
The CRASAR-U-DROIDs dataset consists of fifty-two (52) orthomosaics from ten (10) federally declared disasters.
arXiv Detail & Related papers (2024-07-24T23:39:10Z) - Causality-informed Rapid Post-hurricane Building Damage Detection in Large Scale from InSAR Imagery [6.331801334141028]
Timely and accurate assessment of hurricane-induced building damage is crucial for effective post-hurricane response and recovery efforts.
Recently, remote sensing technologies provide large-scale optical or Interferometric Synthetic Aperture Radar (InSAR) imagery data immediately after a disastrous event.
These InSAR imageries often contain highly noisy and mixed signals induced by co-occurring or co-located building damage, flood, flood/wind-induced vegetation changes, as well as anthropogenic activities.
This paper introduces an approach for rapid post-hurricane building damage detection from InSAR imagery.
arXiv Detail & Related papers (2023-10-02T18:56:05Z) - Rapid building damage assessment workflow: An implementation for the 2023 Rolling Fork, Mississippi tornado event [4.08156787395201]
This paper introduces a human-in-the-loop workflow for rapidly training building damage assessment models after a natural disaster.
This article details a case study using this workflow, executed in partnership with the American Red Cross during a tornado event in Rolling Fork, Mississippi in March, 2023.
The output from our human-in-the-loop modeling process achieved a precision of 0.86 and recall of 0.80 for damaged buildings when compared to ground truth data collected post-disaster.
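The reported precision (0.86) and recall (0.80) for damaged buildings can be combined into a single F1 score; the paper entry above does not report F1, so this is simply the standard harmonic-mean computation applied to its stated metrics.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Metrics reported for damaged buildings in the Rolling Fork case study.
print(round(f1_score(0.86, 0.80), 3))  # ~0.829
```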
arXiv Detail & Related papers (2023-06-21T22:01:12Z) - Detecting Damage Building Using Real-time Crowdsourced Images and Transfer Learning [53.26496452886417]
This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter.
Using transfer learning and 6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene.
The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations and ran in near real-time on Twitter feed after the 2020 M7.0 earthquake in Turkey.
arXiv Detail & Related papers (2021-10-12T06:31:54Z) - Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z) - Assessing out-of-domain generalization for robust building damage detection [78.6363825307044]
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the out-of-domain (OOD) regime instead.
arXiv Detail & Related papers (2020-11-20T10:30:43Z) - Physics-informed GANs for Coastal Flood Visualization [65.54626149826066]
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z) - MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos [74.22132693931145]
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
arXiv Detail & Related papers (2020-06-30T02:23:05Z) - RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that can simultaneously segment buildings and assess the damage levels to individual buildings and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z) - Rapid Damage Assessment Using Social Media Images by Combining Human and Machine Intelligence [5.610924570214424]
This work analyzes the usefulness of social media imagery content to perform rapid damage assessment during a real-world disaster.
An automatic image processing system processed 280K images to understand the extent of damage caused by the disaster.
arXiv Detail & Related papers (2020-04-14T17:26:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.