RescueNet: Joint Building Segmentation and Damage Assessment from
Satellite Imagery
- URL: http://arxiv.org/abs/2004.07312v1
- Date: Wed, 15 Apr 2020 19:52:09 GMT
- Title: RescueNet: Joint Building Segmentation and Damage Assessment from
Satellite Imagery
- Authors: Rohit Gupta and Mubarak Shah
- Abstract summary: RescueNet is a unified model that can simultaneously segment buildings and assess the damage levels to individual buildings and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
- Score: 83.49145695899388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and fine-grained information about the extent of damage to buildings
is essential for directing Humanitarian Aid and Disaster Response (HADR)
operations in the immediate aftermath of any natural calamity. In recent years,
satellite and UAV (drone) imagery has been used for this purpose, sometimes
aided by computer vision algorithms. Existing computer vision approaches for
building damage assessment typically rely on a two-stage pipeline: building
detection with an object detection model, followed by damage assessment through
classification of the detected building tiles. These multi-stage methods are
not end-to-end trainable and suffer from poor overall results. We propose
RescueNet, a unified model that simultaneously segments buildings and assesses
the damage level of each building, and can be trained end-to-end. To model the
composite nature of this problem,
we propose a novel localization-aware loss function, which consists of a
binary cross-entropy loss for building segmentation and a foreground-only
selective categorical cross-entropy loss for damage classification, and show
significant improvement over the widely used cross-entropy loss. RescueNet is
tested on the
large scale and diverse xBD dataset and achieves significantly better building
segmentation and damage classification performance than previous methods and
achieves generalization across varied geographical regions and disaster types.
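The localization-aware loss described in the abstract, binary cross-entropy for segmentation plus a categorical cross-entropy restricted to foreground (building) pixels, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the weighting factor `lam` and the tensor layouts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalizationAwareLoss(nn.Module):
    """Sketch of a localization-aware loss: BCE on the binary building mask,
    plus per-pixel categorical cross-entropy averaged only over foreground
    (building) pixels. `lam` balances the two terms and is an assumption."""

    def __init__(self, lam: float = 1.0):
        super().__init__()
        self.lam = lam

    def forward(self, seg_logits, dmg_logits, building_mask, damage_labels):
        # seg_logits:     (B, 1, H, W) building-vs-background logits
        # dmg_logits:     (B, C, H, W) per-pixel damage-class logits
        # building_mask:  (B, 1, H, W) float mask in {0, 1}
        # damage_labels:  (B, H, W)    long damage class per pixel
        seg_loss = F.binary_cross_entropy_with_logits(seg_logits, building_mask)
        # Per-pixel CE, then keep only foreground (building) pixels.
        ce = F.cross_entropy(dmg_logits, damage_labels, reduction="none")  # (B, H, W)
        fg = building_mask.squeeze(1)                                     # (B, H, W)
        dmg_loss = (ce * fg).sum() / fg.sum().clamp(min=1.0)
        return seg_loss + self.lam * dmg_loss
```

Masking the classification term this way means background pixels contribute no gradient to the damage head, so the classifier is trained only where a building actually exists.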
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- DeepDamageNet: A two-step deep-learning model for multi-disaster building damage segmentation and classification using satellite imagery [12.869300064524122]
We present a solution that performs the two most important tasks in building damage assessment, segmentation and classification, through deep-learning models.
Our best model couples a building identification semantic segmentation convolutional neural network (CNN) to a building damage classification CNN, with a combined F1 score of 0.66.
We find that though our model was able to identify buildings with relatively high accuracy, building damage classification across various disaster types is a difficult task.
arXiv Detail & Related papers (2024-05-08T04:21:03Z)
- Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data [58.720142291102135]
We present a novel approach to automatically assess multi-class building damage from real-world point clouds.
We use a machine learning model trained on virtual laser scanning (VLS) data.
The model yields high multi-target classification accuracies (overall accuracy: 92.0% - 95.1%).
arXiv Detail & Related papers (2023-02-24T12:04:46Z)
- Towards Cross-Disaster Building Damage Assessment with Graph Convolutional Networks [1.9087335681007478]
In the aftermath of disasters, building damage maps are obtained using change detection to plan rescue operations.
Current convolutional neural network approaches do not consider the similarities between neighboring buildings for predicting the damage.
We present a novel graph-based building damage detection solution to capture these relationships.
arXiv Detail & Related papers (2022-01-25T15:25:21Z)
- Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery [0.0]
We use a dataset of labeled pre- and post-disaster satellite imagery and train multiple convolutional neural networks (CNNs) to assess building damage on a per-building basis.
Our research seeks to computationally contribute to aiding in this ongoing and growing humanitarian crisis, heightened by anthropogenic climate change.
arXiv Detail & Related papers (2022-01-24T16:55:56Z)
- Fully convolutional Siamese neural networks for buildings damage assessment from satellite images [1.90365714903665]
Damage assessment after natural disasters is needed to optimally distribute aid and recovery resources.
We develop a computational approach for an automated comparison of the same region's satellite images before and after the disaster.
We include an extensive ablation study and compare different encoders, decoders, loss functions, augmentations, and several methods to combine two images.
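A pre/post comparison of this kind is typically built around a shared-weight (Siamese) encoder applied to both images. The sketch below is purely illustrative: the tiny encoder and the feature-difference fusion are assumptions for brevity, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Minimal Siamese sketch: one shared encoder runs over the pre- and
    post-disaster images, features are fused by subtraction, and a 1x1
    convolution produces per-pixel damage-class logits. All layer sizes
    here are illustrative assumptions."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, pre, post):
        f_pre = self.encoder(pre)    # shared weights: same encoder both times
        f_post = self.encoder(post)
        return self.head(f_post - f_pre)  # (B, num_classes, H, W) logits
```

Because the encoder weights are shared, identical pre/post inputs yield a zero feature difference, which is one reason this fusion is a natural fit for change detection; the ablation study above compares such fusion choices among others.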
arXiv Detail & Related papers (2021-10-31T14:18:59Z)
- Assessing out-of-domain generalization for robust building damage detection [78.6363825307044]
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the OOD regime instead.
arXiv Detail & Related papers (2020-11-20T10:30:43Z)
- Learning from Multimodal and Multitemporal Earth Observation Data for Building Damage Mapping [17.324397643429638]
We have developed a global multisensor and multitemporal dataset for building damage mapping.
The global dataset contains high-resolution optical imagery and high-to-moderate-resolution multiband SAR data.
We defined a damage mapping framework for the semantic segmentation of damaged buildings based on a deep convolutional neural network algorithm.
arXiv Detail & Related papers (2020-09-14T05:04:19Z)
- MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos [74.22132693931145]
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
arXiv Detail & Related papers (2020-06-30T02:23:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.