Creating A Coefficient of Change in the Built Environment After a
Natural Disaster
- URL: http://arxiv.org/abs/2111.04462v2
- Date: Tue, 9 Nov 2021 20:12:46 GMT
- Title: Creating A Coefficient of Change in the Built Environment After a
Natural Disaster
- Authors: Karla Saldana Ochoa
- Abstract summary: This study proposes a novel method to quantify damage in the built environment using a deep learning workflow.
Aerial images from before and after natural disasters at 50 epicenters worldwide were obtained from Google Earth.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study proposes a novel method to quantify damage in the built
environment using a deep learning workflow. With an automated crawler, aerial
images from before and after natural disasters at 50 epicenters worldwide were
obtained from Google Earth, generating a database of 10,000 aerial images with
a spatial resolution of 2 m per pixel. The study uses SegNet, one of the most
popular and general CNN architectures for image segmentation, to perform
semantic segmentation of the built environment from the satellite images in
both instances (pre- and post-disaster). The SegNet model reached a
segmentation accuracy of 92%. After segmentation, we compared the disparity
between the two cases, expressed as a percentage of change. This coefficient
of change numerically represents the damage an urban environment sustained
and quantifies the overall damage to the built environment. Such an index can
give the government an estimate of the number of affected households and
perhaps the extent of housing damage.
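To make the computation concrete, the following is a minimal sketch of one plausible way to derive such a percentage-of-change coefficient from two binary built-environment masks; the function name and exact formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def coefficient_of_change(mask_before: np.ndarray, mask_after: np.ndarray) -> float:
    """Percentage of change in built-environment pixels between two binary masks.

    Both masks are H x W arrays where 1 marks pixels segmented as built
    environment and 0 marks everything else (illustrative formulation).
    """
    built_before = float(mask_before.sum())
    built_after = float(mask_after.sum())
    if built_before == 0:
        return 0.0  # nothing was classified as built up before the event
    return 100.0 * abs(built_before - built_after) / built_before

# Example: 100 built-up pixels before the disaster, 60 after -> 40.0% change
before = np.ones((10, 10), dtype=np.uint8)
after = np.ones((10, 10), dtype=np.uint8)
after[:4, :] = 0  # 40 pixels no longer segmented as built environment
print(coefficient_of_change(before, after))  # 40.0
```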
Related papers
- Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
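As an illustration of what an object-specific contextual transformation could look like at training time, the following minimal sketch perturbs only the pixels of one segmented background object; the brightness jitter is an assumed stand-in, not the paper's exact augmentation.

```python
import numpy as np

def transform_object_region(image: np.ndarray, object_mask: np.ndarray,
                            rng: np.random.Generator) -> np.ndarray:
    """Apply a photometric jitter only to one background object's pixels.

    `image` is H x W x 3 (uint8); `object_mask` is a boolean H x W mask of a
    background object such as a bed or wheelchair. Only the masked region is
    modified; the rest of the image is left untouched.
    """
    out = image.astype(np.float32)
    factor = rng.uniform(0.6, 1.4)  # random brightness change for this object
    out[object_mask] = np.clip(out[object_mask] * factor, 0, 255)
    return out.astype(np.uint8)
```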
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery [51.73680703579997]
We present a neural radiance field method for urban-scale semantic and building-level instance segmentation from aerial images.
Objects in urban aerial images, such as buildings, cars, and roads, exhibit substantial variations in size.
We introduce a scale-adaptive semantic label fusion strategy that enhances the segmentation of objects of varying sizes.
We then introduce a novel cross-view instance label grouping strategy to mitigate the multi-view inconsistency problem in the 2D instance labels.
arXiv Detail & Related papers (2024-03-18T14:15:39Z)
- Transformer-based Flood Scene Segmentation for Developing Countries [1.7499351967216341]
Floods are large-scale natural disasters that often induce a massive number of deaths, extensive material damage, and economic turmoil.
Early Warning Systems (EWS) constantly assess water levels and other factors to forecast floods and help minimize damage.
FloodTransformer is the first visual transformer-based model to detect and segment flooded areas from aerial images at disaster sites.
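The summary does not detail the FloodTransformer architecture itself; the sketch below is only a generic, minimal transformer-based binary segmenter for flooded regions, illustrating the overall approach under assumed patch and embedding sizes.

```python
import torch
import torch.nn as nn

class TinyTransformerSegmenter(nn.Module):
    """Generic ViT-style segmenter producing a per-pixel flood / no-flood map."""

    def __init__(self, image_size=256, patch=16, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_tokens = (image_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Conv2d(dim, 1, kernel_size=1)  # flood / not flood logits

    def forward(self, x):                              # x: (B, 3, H, W)
        tokens = self.embed(x)                         # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2) + self.pos  # (B, h*w, dim)
        seq = self.encoder(seq)
        feat = seq.transpose(1, 2).reshape(b, d, h, w)
        logits = self.head(feat)                       # coarse per-patch logits
        return nn.functional.interpolate(logits, scale_factor=self.patch,
                                         mode="bilinear", align_corners=False)
```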
arXiv Detail & Related papers (2022-10-09T10:29:41Z)
- Fully convolutional Siamese neural networks for buildings damage assessment from satellite images [1.90365714903665]
Damage assessment after natural disasters is needed to optimally distribute aid and recovery forces.
We develop a computational approach for an automated comparison of the same region's satellite images before and after the disaster.
We include an extensive ablation study and compare different encoders, decoders, loss functions, augmentations, and several methods to combine two images.
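A minimal sketch of a fully convolutional Siamese design for this before/after comparison is given below; the tiny encoder/decoder and the fusion by absolute feature difference are illustrative choices among the options the paper ablates.

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Shared encoder over both images, decoder over the fused features."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, img_before, img_after):
        f_before = self.encoder(img_before)    # same weights for both inputs
        f_after = self.encoder(img_after)
        fused = torch.abs(f_before - f_after)  # one way to combine the pair
        return self.decoder(fused)             # per-pixel damage logits
```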
arXiv Detail & Related papers (2021-10-31T14:18:59Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features [0.2538209532048866]
We propose a mixed data approach, which leverages publicly available satellite imagery and geolocation features of the affected area to identify damaged buildings after a hurricane.
The method demonstrated a significant improvement over performing a similar task using only imagery features, based on a case study of Hurricane Harvey affecting the Greater Houston area in 2017.
In this work, a creative choice of geolocation features was made to complement the imagery features, but users can decide which other features to include to model the physical behavior of the event, depending on their domain knowledge and the type of disaster.
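A minimal sketch of such a mixed-data model, combining an image branch with a tabular geolocation branch, might look as follows; the layer sizes and the number of geolocation features are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MixedDataDamageClassifier(nn.Module):
    """Image branch + geolocation branch, concatenated for damaged / intact."""

    def __init__(self, n_geo_features=4):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> (B, 32)
        )
        self.geo_branch = nn.Sequential(
            nn.Linear(n_geo_features, 16), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(32 + 16, 2)          # damaged / intact

    def forward(self, image, geo_features):
        x = torch.cat([self.image_branch(image),
                       self.geo_branch(geo_features)], dim=1)
        return self.classifier(x)
```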
arXiv Detail & Related papers (2020-12-15T21:30:19Z)
- MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos [74.22132693931145]
We study the problem of efficiently assessing building damage after natural disasters like hurricanes, floods or fires.
The first contribution is a new dataset, consisting of user-generated aerial videos from social media with annotations of instance-level building damage masks.
The second contribution is a new model, namely MSNet, which contains novel region proposal network designs.
arXiv Detail & Related papers (2020-06-30T02:23:05Z)
- Synthetic Image Augmentation for Damage Region Segmentation using Conditional GAN with Structure Edge [0.0]
We propose a synthetic augmentation procedure that generates damaged images using image-to-image translation.
We apply popular per-pixel segmentation algorithms such as FCN-8s, SegNet, and DeepLabv3+Xception-v2.
We demonstrate that re-training on a dataset extended with the synthetic augmentation procedure yields higher accuracy on test images, as measured by mean IoU, damage region-of-interest IoU, precision, recall, and BF score.
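A minimal sketch of such a retraining setup, mixing real and GAN-generated (image, mask) pairs into one training set, is shown below; `real_dataset` and `synthetic_dataset` are placeholder Dataset objects, not code from the paper.

```python
from torch.utils.data import ConcatDataset, DataLoader

def build_augmented_loader(real_dataset, synthetic_dataset, batch_size=8):
    """Train the segmenter on the union of real and synthetic samples.

    Both arguments are assumed to be torch Dataset objects yielding
    (image, mask) pairs; only the mixing step is illustrated here.
    """
    combined = ConcatDataset([real_dataset, synthetic_dataset])
    return DataLoader(combined, batch_size=batch_size, shuffle=True)
```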
arXiv Detail & Related papers (2020-05-07T06:04:02Z)
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that simultaneously segments buildings and assesses the damage level of each building, and it can be trained end-to-end.
RescueNet is tested on the large-scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
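A minimal sketch of a joint model in this spirit, with a shared encoder and separate building and damage heads trained end-to-end, is shown below; layer sizes and the number of damage levels are assumptions, not RescueNet's actual design.

```python
import torch.nn as nn

class JointSegmentationDamageNet(nn.Module):
    """Shared encoder with one building-segmentation head and one damage head."""

    def __init__(self, n_damage_levels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.building_head = nn.Conv2d(64, 1, 1)              # building vs. background
        self.damage_head = nn.Conv2d(64, n_damage_levels, 1)  # per-pixel damage level

    def forward(self, x):
        features = self.encoder(x)
        return self.building_head(features), self.damage_head(features)
```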
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, we design a novel self-guided regression loss in addition to the frequently used VGG feature-matching loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
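A minimal sketch of a block of parallel dilated convolutions whose outputs are concatenated, enlarging the receptive field, is shown below; the exact dense combination used in the paper may differ.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, fused by a 1x1 conv."""

    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(out)  # same spatial size and channel count as input
```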
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.