Learning from Multimodal and Multitemporal Earth Observation Data for
Building Damage Mapping
- URL: http://arxiv.org/abs/2009.06200v1
- Date: Mon, 14 Sep 2020 05:04:19 GMT
- Title: Learning from Multimodal and Multitemporal Earth Observation Data for
Building Damage Mapping
- Authors: Bruno Adriano, Naoto Yokoya, Junshi Xia, Hiroyuki Miura, Wen Liu,
Masashi Matsuoka, Shunichi Koshimura
- Abstract summary: We have developed a global multisensor and multitemporal dataset for building damage mapping.
The global dataset contains high-resolution optical imagery and high-to-moderate-resolution multiband SAR data.
We defined a damage mapping framework for the semantic segmentation of damaged buildings based on a deep convolutional neural network algorithm.
- Score: 17.324397643429638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Earth observation (EO) technologies, such as optical imaging and synthetic
aperture radar (SAR), provide excellent means to monitor ever-growing urban
environments continuously. Notably, in the case of large-scale disasters (e.g.,
tsunamis and earthquakes), in which a response is highly time-critical, images
from both data modalities can complement each other to accurately convey the
full damage condition in the disaster's aftermath. However, due to several
factors, such as weather and satellite coverage, it is often uncertain which
data modality will be the first available for rapid disaster response efforts.
Hence, novel methodologies that can utilize all accessible EO datasets are
essential for disaster management. In this study, we have developed a global
multisensor and multitemporal dataset for building damage mapping. We included
building damage characteristics from three disaster types, namely, earthquakes,
tsunamis, and typhoons, and considered three building damage categories. The
global dataset contains high-resolution optical imagery and
high-to-moderate-resolution multiband SAR data acquired before and after each
disaster. Using this comprehensive dataset, we analyzed five data modality
scenarios for damage mapping: single-mode (optical and SAR datasets),
cross-modal (pre-disaster optical and post-disaster SAR datasets), and mode
fusion scenarios. We defined a damage mapping framework for the semantic
segmentation of damaged buildings based on a deep convolutional neural network
algorithm. We compared our approach with a state-of-the-art baseline model
for damage mapping. The results indicated that our dataset, together with a
deep learning network, enabled acceptable predictions for all the data modality
scenarios.
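As a rough, non-authoritative illustration of how such modality scenarios could be wired into a single segmentation network, the sketch below stacks whichever pre-/post-disaster optical and SAR bands a scenario uses and feeds them to a small encoder-decoder with three damage classes plus background. The band counts, the fusion-by-channel-concatenation strategy, and the tiny network are assumptions for illustration, not the authors' actual architecture or their exact five scenarios.

```python
# Minimal sketch (PyTorch): channel-stacking fusion for representative modality scenarios.
# Assumed band counts: optical = 3, SAR = 2 (e.g., dual-pol); the paper's dataset and
# network may differ.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # background + three damage categories (per the abstract)

# Representative combinations; the paper evaluates five scenarios whose exact
# definitions may differ from these placeholders.
SCENARIOS = {
    "optical_only": (["opt"], ["opt"]),
    "sar_only":     (["sar"], ["sar"]),
    "cross_modal":  (["opt"], ["sar"]),   # pre-disaster optical, post-disaster SAR
    "fusion":       (["opt", "sar"], ["opt", "sar"]),
}
BANDS = {"opt": 3, "sar": 2}

def in_channels(scenario: str) -> int:
    pre, post = SCENARIOS[scenario]
    return sum(BANDS[m] for m in pre) + sum(BANDS[m] for m in post)

class TinySegNet(nn.Module):
    """Deliberately small encoder-decoder stand-in for the CNN in the paper."""
    def __init__(self, in_ch: int, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 3, padding=1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    scenario = "cross_modal"
    model = TinySegNet(in_channels(scenario))
    x = torch.randn(1, in_channels(scenario), 128, 128)  # stacked pre+post bands
    print(scenario, model(x).shape)  # -> (1, 4, 128, 128) per-pixel class scores
```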
Related papers
- Generalizable Disaster Damage Assessment via Change Detection with Vision Foundation Model [17.016411785224317]
We present DAVI (Disaster Assessment with VIsion foundation model), which overcomes domain disparities and detects structural damage without requiring ground-truth labels of the target region.
DAVI integrates task-specific knowledge from a model trained on source regions with an image segmentation foundation model to generate pseudo labels of possible damage in the target region.
It then employs a two-stage refinement process, targeting both the pixel and overall image, to more accurately pinpoint changes in disaster-struck areas.
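As a hedged sketch of what a two-stage (pixel-level, then image-level) refinement could look like, the snippet below keeps only confident per-pixel pseudo-labels and discards tiles with too few confident pixels. The thresholds and the confidence-filtering strategy are assumptions, not DAVI's actual procedure.

```python
# Illustrative pseudo-label filtering with a pixel-level and an image-level stage.
import numpy as np

def refine_pseudo_labels(damage_prob: np.ndarray,
                         pixel_thresh: float = 0.7,
                         image_thresh: float = 0.05) -> np.ndarray:
    """damage_prob: (H, W) per-pixel damage probabilities from a source-trained model.

    Stage 1 (pixel level): keep only confident pixels as pseudo-labels and mark
    low-confidence pixels as 'ignore' (-1) so they do not contribute to training.
    Stage 2 (image level): if too few confident pixels remain, discard the whole
    tile rather than training on a likely-noisy image.
    """
    pseudo = np.full(damage_prob.shape, -1, dtype=np.int64)   # -1 = ignore
    pseudo[damage_prob >= pixel_thresh] = 1                   # confident damage
    pseudo[damage_prob <= 1.0 - pixel_thresh] = 0             # confident intact

    if np.mean(pseudo != -1) < image_thresh:                  # image-level check
        pseudo[:] = -1
    return pseudo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = refine_pseudo_labels(rng.random((64, 64)))
    print("labeled pixels:", int(np.sum(labels != -1)), "of", labels.size)
```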
arXiv Detail & Related papers (2024-06-12T09:21:28Z)
- QuickQuakeBuildings: Post-earthquake SAR-Optical Dataset for Quick Damaged-building Detection [5.886875818210989]
This letter presents the first dataset dedicated to detecting earthquake-damaged buildings from post-event very high resolution (VHR) Synthetic Aperture Radar (SAR) and optical imagery.
We deliver a dataset of coregistered building footprints and satellite image patches of both SAR and optical data, encompassing more than four thousand buildings.
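A sketch of how per-building SAR/optical patch pairs could be organised for a dataset of this kind is shown below. The field names and the binary damaged/intact label are illustrative assumptions; the released dataset's actual layout and label scheme may differ.

```python
# Hypothetical per-building sample structure: coregistered SAR and optical patches
# cropped around each building footprint, plus a binary damage label.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class BuildingSample:
    building_id: str
    sar_patch: np.ndarray      # (H, W, C_sar) post-event SAR crop around the footprint
    optical_patch: np.ndarray  # (H, W, 3) post-event optical crop, coregistered
    damaged: int               # 1 = damaged, 0 = intact (placeholder label scheme)

def make_dummy_samples(n: int = 4) -> List[BuildingSample]:
    rng = np.random.default_rng(42)
    return [
        BuildingSample(
            building_id=f"bldg_{i:05d}",
            sar_patch=rng.random((64, 64, 2)),
            optical_patch=rng.random((64, 64, 3)),
            damaged=int(rng.integers(0, 2)),
        )
        for i in range(n)
    ]

if __name__ == "__main__":
    for s in make_dummy_samples():
        print(s.building_id, s.sar_patch.shape, s.optical_patch.shape, s.damaged)
```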
arXiv Detail & Related papers (2023-12-11T18:19:36Z)
- AB2CD: AI for Building Climate Damage Classification and Detection [0.0]
We explore the implementation of deep learning techniques for precise building damage assessment in the context of natural hazards.
We tackle the challenges of generalization to novel disasters and regions while accounting for the influence of low-quality and noisy labels.
Our research findings showcase the potential and limitations of advanced AI solutions in enhancing the impact assessment of climate change-induced extreme weather events.
arXiv Detail & Related papers (2023-09-03T03:37:04Z)
- Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data [58.720142291102135]
We present a novel approach to automatically assess multi-class building damage from real-world point clouds.
We use a machine learning model trained on virtual laser scanning (VLS) data.
The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%).
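The sketch below is a generic stand-in for this idea: derive simple per-building change features from pre- and post-event point clouds and classify a damage grade with a random forest. The features, the classifier, and the synthetic training data are illustrative assumptions, not the paper's VLS-trained model.

```python
# Generic point-cloud change features + random forest damage-grade classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def change_features(pre_pts: np.ndarray, post_pts: np.ndarray) -> np.ndarray:
    """pre_pts/post_pts: (N, 3) point clouds for one building (x, y, z)."""
    dz_mean = post_pts[:, 2].mean() - pre_pts[:, 2].mean()   # mean height change
    dz_max = post_pts[:, 2].max() - pre_pts[:, 2].max()      # roof-level change
    density_ratio = len(post_pts) / max(len(pre_pts), 1)     # point-count ratio
    roughness = post_pts[:, 2].std() - pre_pts[:, 2].std()   # surface roughness change
    return np.array([dz_mean, dz_max, density_ratio, roughness])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic training set: 200 buildings, 3 hypothetical damage grades.
    X = np.stack([change_features(rng.normal(size=(500, 3)),
                                  rng.normal(size=(500, 3))) for _ in range(200)])
    y = rng.integers(0, 3, size=200)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy on synthetic data:", clf.score(X, y))
```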
arXiv Detail & Related papers (2023-02-24T12:04:46Z)
- Building Coverage Estimation with Low-resolution Remote Sensing Imagery [65.95520230761544]
We propose a method for estimating building coverage using only publicly available low-resolution satellite imagery.
Our model achieves a coefficient of determination as high as 0.968 on predicting building coverage in regions of different levels of development around the world.
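The headline metric here is the coefficient of determination (R^2). For reference, a minimal implementation of that metric is sketched below; the regression model itself is not reproduced, and the example values are made up.

```python
# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    truth = np.array([0.10, 0.35, 0.60, 0.80, 0.95])   # building coverage fractions
    preds = np.array([0.12, 0.33, 0.58, 0.82, 0.93])
    print(f"R^2 = {r_squared(truth, preds):.3f}")
```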
arXiv Detail & Related papers (2023-01-04T05:19:33Z)
- Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery [0.0]
We use a dataset of labeled pre- and post-disaster satellite imagery and train multiple convolutional neural networks (CNNs) to assess building damage on a per-building basis.
Our research seeks to computationally contribute to aiding in this ongoing and growing humanitarian crisis, heightened by anthropogenic climate change.
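As a hedged illustration of a per-building classifier over pre/post imagery, the sketch below uses a two-branch CNN whose features are concatenated before a classification head. The architecture is a generic stand-in, not one of the CNNs trained in the paper.

```python
# Illustrative two-branch CNN for per-building damage classification (PyTorch).
import torch
import torch.nn as nn

class PrePostClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
        self.pre = branch()   # encodes the pre-disaster patch
        self.post = branch()  # encodes the post-disaster patch
        self.head = nn.Linear(64, num_classes)

    def forward(self, pre_img, post_img):
        feats = torch.cat([self.pre(pre_img).flatten(1),
                           self.post(post_img).flatten(1)], dim=1)
        return self.head(feats)

if __name__ == "__main__":
    model = PrePostClassifier()
    pre = torch.randn(4, 3, 64, 64)
    post = torch.randn(4, 3, 64, 64)
    print(model(pre, post).shape)  # -> (4, 2) per-building logits
```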
arXiv Detail & Related papers (2022-01-24T16:55:56Z)
- Spatial-Temporal Sequential Hypergraph Network for Crime Prediction [56.41899180029119]
We propose Spatial-Temporal Sequential Hypergraph Network (ST-SHN) to collectively encode complex crime spatial-temporal patterns.
In particular, to handle spatial-temporal dynamics under the long-range and global context, we design a graph-structured message passing architecture.
We conduct extensive experiments on two real-world datasets, showing that our proposed ST-SHN framework can significantly improve the prediction performance.
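For readers unfamiliar with graph-structured message passing, the snippet below shows one textbook round of neighbour aggregation and update; it is a generic illustration, not ST-SHN's hypergraph design.

```python
# One round of mean-aggregation message passing on a plain graph.
import numpy as np

def message_passing_step(node_feats: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """node_feats: (N, D) node features; adj: (N, N) binary adjacency matrix."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid division by zero
    neighbour_mean = adj @ node_feats / deg             # aggregate neighbour messages
    return 0.5 * node_feats + 0.5 * neighbour_mean      # simple update rule

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    feats = rng.random((5, 4))
    adj = (rng.random((5, 5)) > 0.5).astype(float)
    np.fill_diagonal(adj, 0)
    print(message_passing_step(feats, adj).shape)  # -> (5, 4)
```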
arXiv Detail & Related papers (2022-01-07T12:46:50Z)
- Assessing out-of-domain generalization for robust building damage detection [78.6363825307044]
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the OOD regime instead.
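A simple way to realise such an out-of-domain (OOD) evaluation is to hold out an entire disaster event for testing, as in the sketch below. The event names and sample IDs are placeholders, not the paper's experimental setup.

```python
# Event-held-out ("out-of-domain") train/test split.
from typing import Dict, List, Tuple

def ood_split(samples_by_event: Dict[str, List[str]],
              held_out_event: str) -> Tuple[List[str], List[str]]:
    train = [s for ev, items in samples_by_event.items()
             if ev != held_out_event for s in items]
    test = samples_by_event[held_out_event]
    return train, test

if __name__ == "__main__":
    data = {
        "event_a": ["a_001", "a_002"],   # placeholder sample IDs
        "event_b": ["b_001"],
        "event_c": ["c_001", "c_002", "c_003"],
    }
    train, test = ood_split(data, held_out_event="event_c")
    print(len(train), "train samples,", len(test), "held-out test samples")
```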
arXiv Detail & Related papers (2020-11-20T10:30:43Z)
- MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos [74.22132693931145]
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
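The sketch below illustrates the kind of instance-level annotation such a dataset provides, with each building instance carrying a binary mask and a damage label, and a simple per-frame statistic computed from them. The label names and the statistic are placeholders, not MSNet's annotation scheme or outputs.

```python
# Working with hypothetical instance-level building damage masks.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class BuildingInstance:
    mask: np.ndarray    # (H, W) boolean mask of one building in a video frame
    damage: str         # e.g. "none" / "minor" / "major" (placeholder categories)

def damaged_area_fraction(instances: List[BuildingInstance]) -> float:
    """Fraction of total building pixels that belong to damaged instances."""
    total = sum(inst.mask.sum() for inst in instances)
    damaged = sum(inst.mask.sum() for inst in instances if inst.damage != "none")
    return float(damaged) / float(total) if total else 0.0

if __name__ == "__main__":
    h = w = 32
    a = np.zeros((h, w), bool); a[2:10, 2:10] = True      # intact building
    b = np.zeros((h, w), bool); b[15:30, 15:30] = True    # damaged building
    frame = [BuildingInstance(a, "none"), BuildingInstance(b, "major")]
    print(f"damaged building area fraction: {damaged_area_fraction(frame):.2f}")
```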
arXiv Detail & Related papers (2020-06-30T02:23:05Z)
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that simultaneously segments buildings and assesses the damage level of individual buildings, and it can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
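A minimal multi-task sketch of this joint formulation is shown below: a shared encoder feeds one head for building segmentation and one for per-pixel damage level, with both losses trained end-to-end. This mirrors the idea described in the summary but is not RescueNet's actual architecture.

```python
# Shared-encoder, two-head multi-task sketch (PyTorch).
import torch
import torch.nn as nn

class JointSegDamageNet(nn.Module):
    def __init__(self, damage_levels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.building_head = nn.Conv2d(64, 1, 1)             # building / background
        self.damage_head = nn.Conv2d(64, damage_levels, 1)   # per-pixel damage level

    def forward(self, x):
        feats = self.encoder(x)
        return self.building_head(feats), self.damage_head(feats)

if __name__ == "__main__":
    model = JointSegDamageNet()
    img = torch.randn(2, 3, 128, 128)
    building_logits, damage_logits = model(img)
    # The segmentation and damage losses would be summed for end-to-end training.
    print(building_logits.shape, damage_logits.shape)  # (2,1,128,128) (2,4,128,128)
```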
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.