Physically-Consistent Generative Adversarial Networks for Coastal Flood Visualization
- URL: http://arxiv.org/abs/2104.04785v1
- Date: Sat, 10 Apr 2021 15:00:15 GMT
- Title: Physically-Consistent Generative Adversarial Networks for Coastal Flood Visualization
- Authors: Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia Díaz-Rodríguez, Océane Boulais, Aruna Sankaranarayanan, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin, Dava Newman
- Abstract summary: We propose the first deep learning pipeline to ensure physical consistency in synthetic visual satellite imagery.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical consistency and photorealism.
We publish a dataset of over 25k labelled image pairs to study image-to-image translation in Earth observation.
- Score: 60.690929022840685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As climate change increases the intensity of natural disasters, society needs better tools for adaptation. Floods, for example, are the most frequent natural disaster, and better tools for flood risk communication could increase support for flood-resilient infrastructure development. Our work aims to enable more visual communication of large-scale climate impacts by visualizing the output of coastal flood models as satellite imagery. We propose the first deep learning pipeline to ensure physical consistency in synthetic visual satellite imagery. We advance a state-of-the-art GAN called pix2pixHD, such that it produces imagery that is physically consistent with the output of an expert-validated storm surge model (NOAA SLOSH). By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical consistency and photorealism. We envision our work as a first step towards a global visualization of how climate change shapes our landscape. Continuing on this path, we show that the proposed pipeline generalizes to visualizing Arctic sea ice melt. We also publish a dataset of over 25k labelled image pairs to study image-to-image translation in Earth observation.
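The evaluation against physics-based flood maps suggests a simple quantitative check. As a minimal, illustrative sketch (not the authors' released code): score physical consistency as the intersection-over-union between the flood extent visible in a generated image and the binary flood mask rasterized from the storm surge model. The segmentation step that produces the generated mask is assumed here.

```python
import numpy as np

def flood_iou(generated_mask: np.ndarray, physics_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary HxW flood-extent masks.

    `physics_mask` would come from rasterized storm surge output (e.g.,
    NOAA SLOSH); `generated_mask` from segmenting flooded pixels in the
    GAN imagery. Both preprocessing steps are assumptions of this sketch.
    """
    intersection = np.logical_and(generated_mask, physics_mask).sum()
    union = np.logical_or(generated_mask, physics_mask).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy example: two 4x4 masks overlapping on two pixels.
gen = np.zeros((4, 4), dtype=bool); gen[1:3, 1:3] = True    # 4 flooded pixels
phys = np.zeros((4, 4), dtype=bool); phys[1:3, 0:2] = True  # 4 flooded pixels
print(flood_iou(gen, phys))  # 2 shared / 6 total = 0.333...
```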
Related papers
- AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis [57.249817395828174]
We propose a scalable framework combining pseudo-synthetic renderings from 3D city-wide meshes with real, ground-level crowd-sourced images.
The pseudo-synthetic data simulates a wide range of aerial viewpoints, while the real, crowd-sourced images help improve visual fidelity for ground-level images.
Using this hybrid dataset, we fine-tune several state-of-the-art algorithms and achieve significant improvements on real-world, zero-shot aerial-ground tasks.
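A sketch of the hybrid-dataset idea: draw fine-tuning batches that mix pseudo-synthetic renders with real photos at a fixed ratio. The file names, pool sizes, and the 70/30 ratio are all invented for illustration.

```python
import random

# Stand-ins for the two pools: city-mesh renders and crowd-sourced photos.
pseudo_synthetic = [f"render_{i}.png" for i in range(7000)]
real_crowdsourced = [f"photo_{i}.jpg" for i in range(3000)]

def sample_batch(batch_size=8, synthetic_ratio=0.7):
    """Draw a fine-tuning batch mixing both pools; the ratio is an
    assumption, not a number from the paper."""
    batch = []
    for _ in range(batch_size):
        pool = pseudo_synthetic if random.random() < synthetic_ratio else real_crowdsourced
        batch.append(random.choice(pool))
    return batch

print(sample_batch())
```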
arXiv Detail & Related papers (2025-04-17T17:57:05Z)
- Infinite Leagues Under the Sea: Photorealistic 3D Underwater Terrain Generation by Latent Fractal Diffusion Models [13.58353565350936]
We introduce DreamSea, a generative model for hyper-realistic underwater scenes.
DreamSea is trained on real-world image databases collected from underwater robot surveys.
arXiv Detail & Related papers (2025-03-09T21:43:37Z)
- A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering [7.874736360019618]
This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illumination.
Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry.
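The core observation is that sun direction and scene geometry are both recoverable in aerial photogrammetry. A minimal sketch of the underlying idea, assuming a Lambertian formation model I = albedo × max(0, n·l); the paper's actual model is richer (sky light, shadows, atmosphere).

```python
import numpy as np

def recover_albedo(image: np.ndarray, normals: np.ndarray, sun_dir: np.ndarray,
                   eps: float = 1e-3) -> np.ndarray:
    """Divide out Lambertian shading to estimate albedo.

    image:   HxW intensity image.
    normals: HxWx3 unit surface normals (from photogrammetric geometry).
    sun_dir: length-3 unit vector toward the sun (from capture metadata).
    The Lambertian assumption is this sketch's, not the paper's full model.
    """
    shading = np.clip(np.einsum('ijk,k->ij', normals, sun_dir), eps, None)
    return image / shading

# Toy example: a flat, upward-facing surface lit from 45 degrees.
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0     # all normals point up
sun = np.array([0.0, np.sin(np.pi/4), np.cos(np.pi/4)])  # 45-degree sun elevation
image = 0.6 * normals[..., 2] * sun[2]                   # true albedo 0.6 everywhere
print(recover_albedo(image, normals, sun))               # recovers ~0.6
```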
arXiv Detail & Related papers (2024-09-04T18:58:32Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.117162374919715]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our dataset contributes to the development of underwater image processing methods.
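The abstract does not spell out PHISWID's formation model; a common physics-inspired choice for synthesizing underwater appearance from a clean image and a depth map is the per-channel attenuation model I = J·e^(−βd) + B·(1−e^(−βd)), sketched below with illustrative coefficients.

```python
import numpy as np

def synthesize_underwater(clean: np.ndarray, depth: np.ndarray,
                          beta=(0.35, 0.10, 0.05),
                          backscatter=(0.05, 0.15, 0.25)) -> np.ndarray:
    """Apply a simple underwater image-formation model per RGB channel.

    clean: HxWx3 in-air image in [0, 1];  depth: HxW distance in metres.
    beta / backscatter are per-channel attenuation and veiling-light values;
    the numbers here are illustrative, not taken from the PHISWID paper.
    Model: I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d)).
    """
    out = np.empty_like(clean)
    for c in range(3):
        transmission = np.exp(-beta[c] * depth)
        out[..., c] = clean[..., c] * transmission + backscatter[c] * (1.0 - transmission)
    return out

# Toy example: a mid-grey image whose right column is twice as far away.
clean = np.full((2, 2, 3), 0.5)
depth = np.array([[5.0, 10.0], [5.0, 10.0]])
out = synthesize_underwater(clean, depth)
print(out[0, 0], out[0, 1])  # the farther pixel is darker and shifted blue-green
```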
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, then transforms them into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model can generate photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
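A structural sketch only: stage 1 assigns colors to the points of a given geometry, stage 2 renders the colored points feed-forward. Both stages below are placeholders showing the data flow; nothing here reproduces the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_point_colors(points: np.ndarray, steps: int = 10) -> np.ndarray:
    """Stage 1 placeholder: iteratively denoise per-point RGB colors.
    A real implementation would run a learned 3D sparse diffusion model."""
    colors = rng.normal(size=(points.shape[0], 3))
    for _ in range(steps):
        colors = 0.9 * colors  # stand-in for a learned denoising step
    return np.clip(colors + 0.5, 0.0, 1.0)

def feedforward_render(points, colors, h=32, w=32):
    """Stage 2 placeholder: splat colored points into an image by
    orthographic projection; the paper uses neural rendering instead."""
    img = np.zeros((h, w, 3))
    xy = ((points[:, :2] + 1.0) / 2.0 * [w - 1, h - 1]).astype(int)
    img[xy[:, 1], xy[:, 0]] = colors
    return img

points = rng.uniform(-1.0, 1.0, size=(500, 3))  # in the paper, geometry from satellite imagery
image = feedforward_render(points, diffusion_point_colors(points))
print(image.shape)  # (32, 32, 3)
```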
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Stable Rivers: A Case Study in the Application of Text-to-Image Generative Models for Earth Sciences [0.0]
Text-to-image (TTI) generative models can be used to generate images from a given text-string input.
We evaluated subject-area specific biases in the training data and model performance of Stable Diffusion.
We found that the training data over-represented scenic locations, such as famous rivers and waterfalls, and showed serious under-representation of morphological and environmental terms.
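A minimal sketch of this kind of training-data audit: count how often scenic versus morphological terms appear in image captions. The caption corpus and both term lists are invented for illustration.

```python
from collections import Counter

# Hypothetical caption corpus; the paper audits Stable Diffusion's training data.
captions = [
    "beautiful waterfall at famous national park",
    "scenic river canyon at sunset",
    "point bar and cutbank on a meandering gravel-bed river",
]

scenic_terms = {"beautiful", "scenic", "famous", "sunset"}
morphological_terms = {"point bar", "cutbank", "thalweg", "gravel-bed", "braided"}

def term_frequencies(texts, terms):
    """Count substring occurrences of each term across all captions."""
    counts = Counter()
    for text in texts:
        for term in terms:
            counts[term] += text.lower().count(term)
    return counts

print(term_frequencies(captions, scenic_terms))
print(term_frequencies(captions, morphological_terms))  # far sparser in scenic-biased data
```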
arXiv Detail & Related papers (2023-12-13T01:40:21Z)
- HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion [114.15397904945185]
We propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts.
Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network.
Our framework yields state-of-the-art performance, generating hyper-realistic human images in diverse scenarios.
arXiv Detail & Related papers (2023-10-12T17:59:34Z)
- Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures to classify image pixels as "caustics" or "non-caustics".
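A schematic of the classify-then-correct idea, with a trivial brightness threshold standing in for the paper's learned per-pixel classifier:

```python
import numpy as np

def detect_caustics(image: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Stand-in for the learned per-pixel classifier: flag over-bright
    pixels as 'caustics'. The paper trains a deep network for this step."""
    return image > threshold

def suppress_caustics(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace flagged pixels with the median of non-caustic pixels --
    a crude stand-in for the paper's learned intensity correction."""
    corrected = image.copy()
    corrected[mask] = np.median(image[~mask])
    return corrected

img = np.array([[0.3, 0.9], [0.95, 0.4]])  # two caustic highlights
print(suppress_caustics(img, detect_caustics(img)))  # highlights pulled to 0.35
```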
arXiv Detail & Related papers (2022-12-20T11:11:02Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
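The hybrid-inversion idea, an encoder supplies a first guess that a short optimization refines, in toy numpy form. The linear "generator" and "encoder" replace the paper's 3D-aware GAN; only the 10-step budget echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))                       # toy linear "generator": G(z) = A @ z
E = np.linalg.pinv(A)                             # toy "encoder" for the first guess
lr = 0.9 / (2.0 * np.linalg.norm(A, ord=2) ** 2)  # safe step size for this quadratic

def loss(z, target):
    residual = A @ z - target
    return float(residual @ residual)

def refine(z, target, steps=10):
    """Gradient steps on ||G(z) - target||^2; everything except the
    10-step budget is a toy stand-in for real GAN/NeRF inversion."""
    for _ in range(steps):
        z = z - lr * 2.0 * A.T @ (A @ z - target)
    return z

z_true = rng.normal(size=4)
target = A @ z_true                               # the "image" to de-render
z0 = E @ target + 0.3 * rng.normal(size=4)        # imperfect encoder guess
print(loss(z0, target), "->", loss(refine(z0, target), target))  # loss shrinks
```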
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning.
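A skeletal PyTorch version of the idea: a discriminator whose shared backbone feeds real/fake, segmentation, and reconstruction heads, so the adversarial signal must encode semantics and content. Layer sizes are arbitrary and mirror the stated idea, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedRepDiscriminator(nn.Module):
    """Discriminator with one shared latent serving three tasks:
    patch-wise real/fake scoring, segmentation, and reconstruction."""
    def __init__(self, in_ch=3, n_classes=10, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared representation
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.adv_head = nn.Conv2d(feat, 1, 1)          # patch-wise real/fake logits
        self.seg_head = nn.Conv2d(feat, n_classes, 1)  # per-pixel class logits
        self.rec_head = nn.Conv2d(feat, in_ch, 1)      # content reconstruction

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.seg_head(h), self.rec_head(h)

disc = SharedRepDiscriminator()
adv, seg, rec = disc(torch.randn(1, 3, 64, 64))
print(adv.shape, seg.shape, rec.shape)  # (1,1,64,64) (1,10,64,64) (1,3,64,64)
```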
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Physics-informed GANs for Coastal Flood Visualization [65.54626149826066]
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our Earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z)
- Breaking the Limits of Remote Sensing by Simulation and Deep Learning for Flood and Debris Flow Mapping [13.167695669500391]
We propose a framework that estimates inundation depth and debris-flow-induced topographic deformation from remote sensing imagery.
A water and debris flow simulator generates training data for various artificial disaster scenarios.
We show that regression models based on Attention U-Net and LinkNet architectures trained on such synthetic data can predict the maximum water level and topographic deformation.
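This entry's recipe, simulate disaster scenarios to obtain paired (image, water-level) data and then train a regressor, in miniature: a random "simulator" and a single conv layer stand in for the physical simulator and Attention U-Net / LinkNet.

```python
import torch
import torch.nn as nn

def simulate_pairs(n=16, size=32):
    """Stand-in for the water/debris-flow simulator: the 'water level' target
    is just a function of the 'image', so the toy task is learnable."""
    images = torch.rand(n, 3, size, size)
    depth = images.mean(dim=1, keepdim=True) * 2.0  # fake max-water-level map
    return images, depth

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in for Attention U-Net
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

images, depth = simulate_pairs()
for step in range(200):                             # fit the regressor on synthetic data
    optim.zero_grad()
    loss = loss_fn(model(images), depth)
    loss.backward()
    optim.step()
print(float(loss))  # should end well below the initial ~1.0
```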
arXiv Detail & Related papers (2020-06-09T10:59:15Z)