Rapid post-disaster infrastructure damage characterisation enabled by remote sensing and deep learning technologies -- a tiered approach
- URL: http://arxiv.org/abs/2401.17759v4
- Date: Fri, 12 Apr 2024 07:44:25 GMT
- Title: Rapid post-disaster infrastructure damage characterisation enabled by remote sensing and deep learning technologies -- a tiered approach
- Authors: Nadiia Kopiika, Andreas Karavias, Pavlos Krassakis, Zehao Ye, Jelena Ninic, Nataliya Shakhovska, Nikolaos Koukouzas, Sotirios Argyroudis, Stergios-Aristoteles Mitoulis
- Abstract summary: Transport networks and bridges are systematically targeted during wars and suffer damage during natural disasters.
No methods exist for automated characterisation of damage at multiple scales.
We propose an integrated, multi-scale tiered approach to fill this capability gap.
- Score: 0.4837072536850576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Critical infrastructure, such as transport networks and bridges, is systematically targeted during wars and suffers damage during extensive natural disasters because it is vital for enabling connectivity and transportation of people and goods, and hence underpins national and international economic growth. Mass destruction of transport assets, in conjunction with minimal or no accessibility in the wake of natural and anthropogenic disasters, prevents rapid recovery and adaptation. As a result, systemic operability is drastically reduced, leading to low levels of resilience. Thus, there is a need for rapid assessment of infrastructure condition to allow informed decision-making for restoration prioritisation. A solution to this challenge is to use technology that enables stand-off observations. Nevertheless, no methods exist for automated characterisation of damage at multiple scales, i.e. regional (e.g., network), asset (e.g., bridges), and structural (e.g., road pavement) scales. We propose a methodology based on an integrated, multi-scale tiered approach to fill this capability gap. In doing so, we demonstrate how automated damage characterisation can be enabled by fit-for-purpose digital technologies. The methodology is then applied to and validated on a case study in Ukraine comprising 17 bridges damaged by targeted human interventions. From regional to component scale, we deploy technology to integrate assessments using Sentinel-1 SAR images, crowdsourced information, and high-resolution images for deep learning to facilitate automatic damage detection and characterisation. For the first time, interferometric coherence difference and semantic segmentation of images were deployed in a tiered multi-scale approach to improve the reliability of damage characterisation at different scales.
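The coherence-based regional screening described in the abstract can be illustrated with a short sketch. The window size, the 0.3 threshold, and the random placeholder inputs below are our own illustrative choices, not values from the paper; the sketch only shows how a pre-event and a co-event interferometric coherence map might be differenced and thresholded to flag candidate damage.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Windowed interferometric coherence of two co-registered complex SAR images."""
    cross = s1 * np.conj(s2)
    cross_avg = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    denom = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                    uniform_filter(np.abs(s2) ** 2, win)) + 1e-12
    return np.abs(cross_avg) / denom

# Random placeholder acquisitions: two pre-event images and one post-event image.
rng = np.random.default_rng(0)
def fake_slc(shape=(256, 256)):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

pre_a, pre_b, post = fake_slc(), fake_slc(), fake_slc()

coh_pre = coherence(pre_a, pre_b)   # pre-event pair (both acquisitions before the event)
coh_co = coherence(pre_b, post)     # co-event pair (spans the event)
coherence_difference = coh_pre - coh_co

# A drop in coherence across the event is a candidate damage indicator;
# the 0.3 threshold is purely illustrative.
damage_mask = coherence_difference > 0.3
print("flagged pixels:", int(damage_mask.sum()))
```

In the tiered workflow described above, such region-scale flags would then be refined at asset and component scale using crowdsourced information, high-resolution imagery, and deep-learning-based segmentation.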
Related papers
- PDSR: Efficient UAV Deployment for Swift and Accurate Post-Disaster Search and Rescue [2.367791790578455]
This paper introduces a comprehensive framework for Post-Disaster Search and Rescue (PDSR).
Central to this concept is the rapid deployment of UAV swarms equipped with diverse sensing, communication, and intelligence capabilities.
The proposed framework aims to achieve complete coverage of damaged areas significantly faster than traditional methods.
arXiv Detail & Related papers (2024-10-30T12:46:15Z)
- Multi-step feature fusion for natural disaster damage assessment on satellite images [0.0]
We introduce a novel convolutional neural network (CNN) module that performs feature fusion at multiple network levels.
An additional network element, the Fuse Module, is proposed to adapt any CNN model to analyze image pairs.
We report over a 3 percentage point increase in the accuracy of the Vision Transformer model.
arXiv Detail & Related papers (2024-10-29T09:47:32Z)
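The pre-/post-image feature fusion summarised in the entry above can be sketched as follows. The shared-weight encoder, the single fusion level, and the FuseBlock below are illustrative stand-ins, not the paper's Fuse Module or backbone.

```python
import torch
import torch.nn as nn

class FuseBlock(nn.Module):
    """Illustrative fusion of pre/post feature maps: concatenate, then mix with a 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_pre, feat_post):
        return self.mix(torch.cat([feat_pre, feat_post], dim=1))

class SiameseDamageNet(nn.Module):
    """Shared-weight encoder applied to pre- and post-event images,
    fused at a single level for brevity (the paper fuses at multiple network levels)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = FuseBlock(64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, pre_img, post_img):
        return self.head(self.fuse(self.encoder(pre_img), self.encoder(post_img)))

model = SiameseDamageNet()
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 4])
```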
- Towards Efficient Disaster Response via Cost-effective Unbiased Class Rate Estimation through Neyman Allocation Stratified Sampling Active Learning [11.697034536189094]
We present an innovative algorithm that constructs Neyman stratified random sampling trees for binary classification.
Our findings demonstrate that our method surpasses both passive and conventional active learning techniques.
It effectively addresses the 'sampling bias' challenge in traditional active learning strategies.
arXiv Detail & Related papers (2024-05-28T01:34:35Z)
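The Neyman allocation behind the sampling strategy summarised above fits in a few lines. The stratum sizes, standard deviations, and labelling budget below are illustrative values, not the paper's.

```python
def neyman_allocation(total_samples, stratum_sizes, stratum_stds):
    """Neyman allocation: assign labelling budget n_h proportional to N_h * sigma_h.
    stratum_sizes: population size N_h of each stratum.
    stratum_stds:  (estimated) standard deviation sigma_h within each stratum."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_stds)]
    total_weight = sum(weights)
    return [round(total_samples * w / total_weight) for w in weights]

# Illustrative example: three image strata with different sizes and label variability.
sizes = [5000, 2000, 500]
stds = [0.20, 0.45, 0.50]
print(neyman_allocation(300, sizes, stds))  # [140, 126, 35]; rounding may not sum exactly to 300
```

The allocation spends more of the labelling budget on strata that are large or internally variable, which is what reduces the variance of the estimated class rates.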
- Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data [66.49494950674402]
We leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images.
We build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains.
We validate the strength of our proposed framework under a cross-geography domain transfer setting from xBD and SKAI images in both single-source and multi-source settings.
arXiv Detail & Related papers (2024-05-22T16:07:05Z)
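A minimal sketch of generating synthetic post-disaster aerial images with an off-the-shelf text-to-image model is shown below; the diffusers library, the model ID, and the prompts are our own assumptions and not necessarily the pipeline used in the paper.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model choice; the paper's generative model and prompting strategy may differ.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "aerial view of flooded residential buildings after a hurricane, satellite imagery",
    "aerial view of collapsed buildings after an earthquake, satellite imagery",
]

for i, prompt in enumerate(prompts):
    # Each call returns PIL images that can be added to a synthetic training set.
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic_damage_{i}.png")
```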
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
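An illustrative adversarial-training step in the spirit of the robustness technique summarised above; this is a generic FGSM-style sketch with placeholder shapes, not FaultGuard's actual online adversarial training.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, x, y, optimizer, eps=0.05):
    """One FGSM-style step: perturb the input in the direction that increases the loss,
    then update the model on the perturbed batch."""
    loss_fn = nn.CrossEntropyLoss()

    # Craft the adversarial example.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on the adversarially perturbed inputs.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()

# Illustrative fault-type classifier over a flat feature vector (placeholder dimensions).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randint(0, 5, (32,))
print(adversarial_training_step(model, x, y, opt))
```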
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
- AI-Based Energy Transportation Safety: Pipeline Radial Threat Estimation Using Intelligent Sensing System [52.93806509364342]
This paper proposes a radial threat estimation method for energy pipelines based on distributed optical fiber sensing technology.
We introduce a continuous multi-view and multi-domain feature fusion methodology to extract comprehensive signal features.
We incorporate the concept of transfer learning through a pre-trained model, enhancing both recognition accuracy and training efficiency.
arXiv Detail & Related papers (2023-12-18T12:37:35Z)
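The transfer-learning idea mentioned above (reusing a pre-trained model to improve accuracy and training efficiency) can be sketched as follows, under our own assumption that the sensing signals are rendered as image-like inputs; the backbone choice and the number of threat classes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone, freeze it, and train only a new head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

num_threat_levels = 4  # illustrative number of radial-threat classes
backbone.fc = nn.Linear(backbone.fc.in_features, num_threat_levels)  # new head, trainable

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
dummy_batch = torch.randn(8, 3, 224, 224)  # placeholder image-like inputs
logits = backbone(dummy_batch)
print(logits.shape)  # torch.Size([8, 4])
```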
- Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery [0.0]
We use a dataset of labeled pre- and post-disaster satellite imagery and train multiple convolutional neural networks (CNNs) to assess building damage on a per-building basis.
Our research seeks to computationally contribute to aiding in this ongoing and growing humanitarian crisis, heightened by anthropogenic climate change.
arXiv Detail & Related papers (2022-01-24T16:55:56Z)
- EmergencyNet: Efficient Aerial Image Classification for Drone-Based Emergency Monitoring Using Atrous Convolutional Feature Fusion [8.634988828030245]
This article focuses on efficient aerial image classification on board a UAV for emergency response/monitoring applications.
A dedicated Aerial Image Database for Emergency Response applications is introduced and a comparative analysis of existing approaches is performed.
A lightweight convolutional neural network architecture is proposed, referred to as EmergencyNet, based on atrous convolutions to process multiresolution features.
arXiv Detail & Related papers (2021-04-28T20:24:10Z)
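The atrous (dilated) convolutional feature fusion named in the entry above can be sketched with parallel branches at different dilation rates; the block below illustrates the general idea and is not the EmergencyNet architecture.

```python
import torch
import torch.nn as nn

class AtrousFusionBlock(nn.Module):
    """Parallel atrous (dilated) convolutions capture multi-resolution context,
    then a 1x1 conv fuses the branches."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(len(rates) * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

block = AtrousFusionBlock(3, 16)
print(block(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```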
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that simultaneously segments buildings and assesses per-building damage levels, and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
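The joint building segmentation and damage assessment described above is, at its core, a shared encoder with two heads trained end-to-end; the sketch below illustrates that multi-task layout, while RescueNet's actual architecture and losses differ.

```python
import torch
import torch.nn as nn

class JointSegDamageNet(nn.Module):
    """Shared encoder with a building-segmentation head and a per-pixel damage head."""
    def __init__(self, damage_levels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(64, 1, kernel_size=1)                 # building vs background
        self.damage_head = nn.Conv2d(64, damage_levels, kernel_size=1)  # damage class per pixel

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.damage_head(feats)

model = JointSegDamageNet()
seg_logits, dmg_logits = model(torch.randn(2, 3, 128, 128))
# End-to-end training would combine both objectives, e.g.
# loss = w_seg * bce(seg_logits, seg_target) + w_dmg * ce(dmg_logits, dmg_target)
print(seg_logits.shape, dmg_logits.shape)
```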
- Real-world Person Re-Identification via Degradation Invariance Learning [111.86722193694462]
Person re-identification (Re-ID) in real-world scenarios usually suffers from various degradation factors, e.g., low-resolution, weak illumination, blurring and adverse weather.
We propose a degradation invariance learning framework for real-world person Re-ID.
By introducing a self-supervised disentangled representation learning strategy, our method is able to simultaneously extract identity-related robust features.
arXiv Detail & Related papers (2020-04-10T07:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.