Multi-step feature fusion for natural disaster damage assessment on satellite images
- URL: http://arxiv.org/abs/2410.21901v1
- Date: Tue, 29 Oct 2024 09:47:32 GMT
- Title: Multi-step feature fusion for natural disaster damage assessment on satellite images
- Authors: Mateusz Żarski, Jarosław Adam Miszczak
- Abstract summary: We introduce a novel convolutional neural network (CNN) module that performs feature fusion at multiple network levels.
An additional network element, the Fuse Module, is proposed to adapt any CNN model to analyze image pairs.
We report an increase of over 3 percentage points in the accuracy of the Vision Transformer model.
- Abstract: Quick and accurate assessment of the damage state of buildings after natural disasters is crucial for undertaking properly targeted rescue and subsequent recovery operations, which can have a major impact on the safety of victims and the cost of disaster recovery. The quality of such a process can be significantly improved by harnessing the potential of machine learning methods in computer vision. This paper presents a novel damage assessment method using an original multi-step feature fusion network for classifying the damage state of buildings based on pre- and post-disaster large-scale satellite images. We introduce a novel convolutional neural network (CNN) module that performs feature fusion at multiple network levels between pre- and post-disaster images, in both the horizontal and vertical directions of the CNN. An additional network element, the Fuse Module, is proposed to adapt any CNN model to the task of image-pair classification. We use open, large-scale datasets (IDA-BD and xView2) to verify that the proposed method improves on existing state-of-the-art architectures. We report an increase of over 3 percentage points in the accuracy of the Vision Transformer model.
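A minimal sketch of the fusion idea, assuming a Siamese setup in which pre- and post-disaster feature maps of matching size are concatenated and re-projected at a given network level (the module name, layer choices, and shapes below are illustrative assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class FuseModule(nn.Module):
    """Illustrative fusion block: merges pre- and post-disaster feature
    maps of the same spatial size at one level of a CNN backbone."""
    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution projects the concatenated features back
        # to the original channel count so the backbone is unchanged.
        self.project = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_pre: torch.Tensor, feat_post: torch.Tensor) -> torch.Tensor:
        # Concatenate along the channel dimension, then fuse.
        return self.project(torch.cat([feat_pre, feat_post], dim=1))

# Usage: fuse intermediate feature maps produced by a shared backbone.
fuse = FuseModule(channels=256)
fused = fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
```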
Related papers
- Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data [66.49494950674402]
We leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images.
We build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains.
We validate the strength of our proposed framework under a cross-geography domain-transfer setting from xBD and SKAI images in both single-source and multi-source settings.
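The exact generation pipeline is not described in this summary; purely as an illustration, an off-the-shelf image-to-image diffusion pipeline could turn a pre-disaster tile into a synthetic post-disaster one. The file names, prompt, and strength below are hypothetical placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Illustrative only: any text-to-image / image-to-image generator could stand in here.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pre_tile = Image.open("pre_disaster_tile.png").convert("RGB").resize((512, 512))
synthetic_post = pipe(
    prompt="aerial view of the same neighborhood after a major hurricane, damaged roofs, debris",
    image=pre_tile,
    strength=0.6,        # how far the output may drift from the source tile
    guidance_scale=7.5,
).images[0]
synthetic_post.save("synthetic_post_disaster_tile.png")
```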
arXiv Detail & Related papers (2024-05-22T16:07:05Z) - DeepDamageNet: A two-step deep-learning model for multi-disaster building damage segmentation and classification using satellite imagery [12.869300064524122]
We present a solution that performs the two most important tasks in building damage assessment, segmentation and classification, through deep-learning models.
Our best model couples a building identification semantic segmentation convolutional neural network (CNN) to a building damage classification CNN, with a combined F1 score of 0.66.
We find that though our model was able to identify buildings with relatively high accuracy, building damage classification across various disaster types is a difficult task.
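A hedged sketch of such a two-step pipeline, assuming the segmentation network first produces a building mask and a second CNN then classifies damage on the masked image (both networks and the threshold are placeholders, not the paper's exact models):

```python
import torch
import torch.nn as nn

class TwoStepDamagePipeline(nn.Module):
    """Illustrative two-step pipeline: a segmentation CNN locates
    building pixels, a second CNN classifies the damage level."""
    def __init__(self, seg_net: nn.Module, cls_net: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.seg_net = seg_net      # outputs a building-probability map (B, 1, H, W)
        self.cls_net = cls_net      # outputs damage logits (B, num_classes)
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, post_image: torch.Tensor):
        building_prob = torch.sigmoid(self.seg_net(post_image))
        building_mask = (building_prob > self.threshold).float()
        # Suppress non-building pixels before damage classification.
        damage_logits = self.cls_net(post_image * building_mask)
        return building_mask, damage_logits.argmax(dim=1)
```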
arXiv Detail & Related papers (2024-05-08T04:21:03Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
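This is not the DiAD architecture itself; as a minimal, hedged illustration of the feature-space component, an anomaly map can be obtained by comparing pre-trained-backbone features of an input and its (diffusion-based) reconstruction. The backbone choice and distance metric are assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Frozen pre-trained extractor used only for feature-space comparison.
backbone = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

@torch.no_grad()
def anomaly_map(image: torch.Tensor, reconstruction: torch.Tensor) -> torch.Tensor:
    """Per-pixel anomaly score from the cosine distance between the
    features of an image and of its reconstruction."""
    f_img = feature_extractor(image)
    f_rec = feature_extractor(reconstruction)
    dist = 1.0 - F.cosine_similarity(f_img, f_rec, dim=1)          # (B, h, w)
    return F.interpolate(dist.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
```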
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data [58.720142291102135]
We present a novel approach to automatically assess multi-class building damage from real-world point clouds.
We use a machine learning model trained on virtual laser scanning (VLS) data.
The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%).
arXiv Detail & Related papers (2023-02-24T12:04:46Z) - DAHiTrA: Damage Assessment Using a Novel Hierarchical Transformer Architecture [4.162725423624233]
This paper presents DAHiTrA, a novel deep-learning model with hierarchical transformers to classify building damages based on satellite images.
Satellite imagery provides real-time, high-coverage information.
Deep-learning methods have been shown to be promising for classifying building damage.
arXiv Detail & Related papers (2022-08-03T16:41:39Z) - Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery [0.0]
We use a dataset of labeled pre- and post-disaster satellite imagery.
We train multiple convolutional neural networks (CNNs) to assess building damage on a per-building basis.
Our research seeks to contribute computational tools to aid in this ongoing and growing humanitarian crisis, heightened by anthropogenic climate change.
arXiv Detail & Related papers (2022-01-24T16:55:56Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after being refined with only a single gradient-ascent step.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
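A minimal sketch of that idea, assuming an FGSM-style single gradient-ascent step applied to a shared perturbation; the step size, bound, and loss are illustrative choices:

```python
import torch
import torch.nn.functional as F

def one_step_refine(model, images, labels, delta, step_size=2/255, eps=8/255):
    """Refine a shared (meta) perturbation `delta` with one
    gradient-ascent step on the classification loss."""
    delta = delta.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images + delta), labels)
    loss.backward()
    with torch.no_grad():
        delta = delta + step_size * delta.grad.sign()   # ascend the loss
        delta = delta.clamp(-eps, eps)                  # keep the perturbation bounded
    return delta.detach()
```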
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - BDANet: Multiscale Convolutional Neural Network with Cross-directional Attention for Building Damage Assessment from Satellite Images [24.989412626461213]
Building damage assessment from satellite imagery is critical before relief efforts are deployed.
Deep neural networks have been successfully applied to building damage assessment.
We propose a novel two-stage convolutional neural network for Building Damage Assessment, called BDANet.
arXiv Detail & Related papers (2021-05-16T06:13:28Z) - MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos [74.22132693931145]
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
arXiv Detail & Related papers (2020-06-30T02:23:05Z) - RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that simultaneously segments buildings and assesses the damage level of individual buildings, and it can be trained end-to-end.
RescueNet is tested on the large-scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
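A hedged sketch of such joint, end-to-end training, assuming one shared backbone with a building-segmentation head and a per-pixel damage-classification head (the heads, losses, and weighting are illustrative, not RescueNet's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDamageModel(nn.Module):
    """Illustrative multi-task model: a shared backbone feeds a
    building-segmentation head and a damage-classification head."""
    def __init__(self, backbone: nn.Module, feat_channels: int, num_damage_classes: int = 4):
        super().__init__()
        self.backbone = backbone
        self.seg_head = nn.Conv2d(feat_channels, 1, kernel_size=1)
        self.dmg_head = nn.Conv2d(feat_channels, num_damage_classes, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.dmg_head(feats)

def joint_loss(seg_logits, dmg_logits, seg_target, dmg_target, w_dmg: float = 1.0):
    # End-to-end training simply sums the two task losses.
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    dmg_loss = F.cross_entropy(dmg_logits, dmg_target)
    return seg_loss + w_dmg * dmg_loss
```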
arXiv Detail & Related papers (2020-04-15T19:52:09Z) - An Attention-Based System for Damage Assessment Using Satellite Imagery [18.43310705820528]
We present the Siam-U-Net-Attn model, a multi-class deep-learning model with an attention mechanism, to assess damage levels of buildings.
We evaluate the proposed method on xView2, a large-scale building damage assessment dataset, and demonstrate that the proposed approach achieves accurate damage scale classification and building segmentation results simultaneously.
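A minimal sketch of the Siamese idea, assuming a shared encoder applied to both images and an attention map computed from their feature difference; this is a stand-in, not the paper's exact attention design:

```python
import torch
import torch.nn as nn

class SiameseDiffAttention(nn.Module):
    """Illustrative Siamese comparison: one shared encoder processes the
    pre- and post-disaster images; an attention map derived from the
    feature difference re-weights the fused features."""
    def __init__(self, encoder: nn.Module, channels: int, num_classes: int = 4):
        super().__init__()
        self.encoder = encoder                                   # shared weights for both inputs
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.classifier = nn.Conv2d(2 * channels, num_classes, kernel_size=1)

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        f_pre, f_post = self.encoder(pre), self.encoder(post)
        attention = self.attn(torch.abs(f_post - f_pre))         # highlight changed regions
        fused = torch.cat([f_pre, f_post], dim=1) * attention
        return self.classifier(fused)                            # per-pixel damage logits
```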
arXiv Detail & Related papers (2020-04-14T16:37:55Z)