Physically-Consistent Generative Adversarial Networks for Coastal Flood
Visualization
- URL: http://arxiv.org/abs/2104.04785v1
- Date: Sat, 10 Apr 2021 15:00:15 GMT
- Title: Physically-Consistent Generative Adversarial Networks for Coastal Flood
Visualization
- Authors: Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa,
Farrukh Chishtie, Natalia Díaz-Rodríguez, Océane Boulais, Aruna
Sankaranarayanan, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin,
Dava Newman
- Abstract summary: We propose the first deep learning pipeline to ensure physical-consistency in synthetic visual satellite imagery.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
We publish a dataset of over 25k labelled image-pairs to study image-to-image translation in Earth observation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As climate change increases the intensity of natural disasters, society needs
better tools for adaptation. Floods, for example, are the most frequent natural
disaster, and better tools for flood risk communication could increase the
support for flood-resilient infrastructure development. Our work aims to enable
more visual communication of large-scale climate impacts via visualizing the
output of coastal flood models as satellite imagery. We propose the first deep
learning pipeline to ensure physical-consistency in synthetic visual satellite
imagery. We advanced a state-of-the-art GAN called pix2pixHD, such that it
produces imagery that is physically-consistent with the output of an
expert-validated storm surge model (NOAA SLOSH). By evaluating the imagery
relative to physics-based flood maps, we find that our proposed framework
outperforms baseline models in both physical-consistency and photorealism. We
envision our work as a first step towards a global visualization of how
climate change shapes our landscape. Continuing on this path, we show that the
proposed pipeline generalizes to visualize arctic sea ice melt. We also publish
a dataset of over 25k labelled image-pairs to study image-to-image translation
in Earth observation.
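The abstract describes evaluating generated imagery against physics-based flood maps for physical consistency. The paper's exact metric is not reproduced here, but a common choice for this kind of comparison is intersection-over-union between the flood extent segmented from the generated image and the physics-based flood map. The sketch below is a minimal, hypothetical illustration of that idea; the function name and toy masks are assumptions, not the authors' implementation:

```python
import numpy as np

def flood_iou(generated_mask: np.ndarray, physics_mask: np.ndarray) -> float:
    """Intersection-over-union between a flood mask segmented from generated
    imagery and a physics-based flood map (both binary arrays).
    Hypothetical metric for illustration; not the paper's exact evaluation."""
    gen = generated_mask.astype(bool)
    phys = physics_mask.astype(bool)
    union = np.logical_or(gen, phys).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially consistent
    return float(np.logical_and(gen, phys).sum() / union)

# Toy 4x4 masks: 2 flooded pixels agree out of 6 flooded in total
gen = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
phys = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(flood_iou(gen, phys), 3))  # → 0.333
```

A higher IoU indicates the generator's flood extent agrees more closely with the expert-validated surge model's prediction, which is one plausible way to operationalize "physical consistency" alongside photorealism metrics.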
Related papers
- Generalizable Disaster Damage Assessment via Change Detection with Vision Foundation Model
We present DAVI (Disaster Assessment with VIsion foundation model), which overcomes domain disparities and detects structural damage without requiring ground-truth labels of the target region.
DAVI integrates task-specific knowledge from a model trained on source regions with an image segmentation foundation model to generate pseudo labels of possible damage in the target region.
It then employs a two-stage refinement process, targeting both the pixel and overall image, to more accurately pinpoint changes in disaster-struck areas.
arXiv Detail & Related papers (2024-06-12T09:21:28Z)
- Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations
We present a systematic framework for training high-fidelity Deep Vision-based coastal flood prediction models in low-data settings.
We also introduce a deep CNN architecture tailored specifically to the coastal flood prediction problem at hand.
The performance of the developed DL models is validated against commonly adopted geostatistical regression methods.
arXiv Detail & Related papers (2024-06-06T19:54:34Z)
- Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data
We leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images.
We build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains.
We validate the strength of our proposed framework under cross-geography domain transfer setting from xBD and SKAI images in both single-source and multi-source settings.
arXiv Detail & Related papers (2024-05-22T16:07:05Z)
- An Architecture for the detection of GAN-generated Flood Images with Localization Capabilities
We propose a hybrid deep learning architecture including both a detection and a localization branch.
We find that adding a localization branch helps the network to focus on the most relevant image regions.
The good performance of the proposed architecture is validated on two datasets of pristine flood images downloaded from the internet and three datasets of fake flood images generated by ClimateGAN.
arXiv Detail & Related papers (2022-05-14T14:23:44Z)
- ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods
We present our solution to simulate photo-realistic floods on authentic images.
We propose ClimateGAN, a model that leverages both simulated and real data for unsupervised domain adaptation and conditional image generation.
arXiv Detail & Related papers (2021-10-06T15:54:57Z)
- Enhancing Photorealism Enhancement
We present an approach to enhancing the realism of synthetic images using a convolutional network.
We analyze scene layout distributions in commonly used datasets and find that they differ in important ways.
We report substantial gains in stability and realism in comparison to recent image-to-image translation methods.
arXiv Detail & Related papers (2021-05-10T19:00:49Z)
- Physics-informed GANs for Coastal Flood Visualization
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)