Generating Physically-Consistent Satellite Imagery for Climate Visualizations
- URL: http://arxiv.org/abs/2104.04785v5
- Date: Mon, 21 Oct 2024 15:50:51 GMT
- Title: Generating Physically-Consistent Satellite Imagery for Climate Visualizations
- Authors: Björn Lütjens, Brandon Leshchinskiy, Océane Boulais, Farrukh Chishtie, Natalia Díaz-Rodríguez, Margaux Masson-Forsythe, Ana Mata-Payerro, Christian Requena-Mesa, Aruna Sankaranarayanan, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin, Dava Newman
- Abstract summary: We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
- Score: 53.61991820941501
- Abstract: Deep generative vision models are now able to synthesize realistic-looking satellite imagery. But the possibility of hallucinations prevents their adoption for risk-sensitive applications, such as generating materials for communicating climate change. To demonstrate this issue, we train a generative adversarial network (pix2pixHD) to create synthetic satellite imagery of future flooding and reforestation events. We find that a pure deep learning-based model can generate photorealistic flood visualizations but hallucinates floods at locations that were not susceptible to flooding. To address this issue, we propose to condition and evaluate generative vision models on segmentation maps of physics-based flood models. We show that our physics-conditioned model outperforms the pure deep learning-based model and a handcrafted baseline. We evaluate the generalization capability of our method to different remote sensing data and different climate-related events (reforestation). We publish our code and dataset, which includes the data for a third case study of melting Arctic sea ice and $>$30,000 labeled HD image triplets -- or the equivalent of 5.5 million images at 128x128 pixels -- for segmentation-guided image-to-image translation in Earth observation. Code and data are available at \url{https://github.com/blutjens/eie-earth-public}.
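The released pipeline builds on pix2pixHD; the following minimal PyTorch sketch illustrates only the conditioning idea, with every module and variable name invented for illustration. The physics-derived flood mask enters the generator as an extra input channel next to the pre-flood image, so that the network is guided to place water only where the physics model predicts flooding.
```python
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    """Minimal encoder-decoder mapping a pre-flood image plus a
    physics-derived flood segmentation mask to a post-flood image."""
    def __init__(self, img_channels=3, mask_channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + mask_channels, width, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, pre_image, flood_mask):
        # Conditioning: the mask is concatenated as an extra input channel,
        # guiding the generator toward physically plausible flood extents.
        return self.net(torch.cat([pre_image, flood_mask], dim=1))

G = MaskConditionedGenerator()
pre = torch.randn(1, 3, 128, 128)   # pre-event satellite image
mask = torch.rand(1, 1, 128, 128)   # flood extent from a physics-based model
post_fake = G(pre, mask)            # synthesized post-flood image
print(post_fake.shape)              # torch.Size([1, 3, 128, 128])
```
In the full pix2pixHD setup the generator is multi-scale and trained adversarially against patch discriminators; the sketch keeps only the mask-concatenation step that supplies the physical conditioning signal.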
Related papers
- A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering [7.874736360019618]
This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illumination.
Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry.
arXiv Detail & Related papers (2024-09-04T18:58:32Z)
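For intuition only, here is a toy Lambertian sketch of the inverse-rendering step, assuming known per-pixel normals, sun direction, and scalar sun/sky irradiance terms; the paper's actual image formation model is more general, and all names below are illustrative.
```python
import numpy as np

def recover_albedo(radiance, normals, sun_dir, e_sun=1.0, e_sky=0.2):
    """Invert a Lambertian image formation model:
    radiance = albedo * (E_sun * max(n . l, 0) + E_sky).
    radiance: (H, W, 3) image, normals: (H, W, 3) unit normals,
    sun_dir: (3,) unit vector toward the sun."""
    shading = e_sun * np.clip(normals @ sun_dir, 0.0, None) + e_sky
    return radiance / np.maximum(shading[..., None], 1e-6)

# Toy example: a flat, upward-facing surface lit from 45 degrees.
H, W = 4, 4
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0
sun = np.array([0.0, np.sqrt(0.5), np.sqrt(0.5)])
radiance = 0.5 * (1.0 * sun[2] + 0.2) * np.ones((H, W, 3))
print(recover_albedo(radiance, normals, sun).mean())  # ~0.5
```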
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Stable Rivers: A Case Study in the Application of Text-to-Image Generative Models for Earth Sciences [0.0]
Text-to-image (TTI) generative models can be used to generate images from a given text-string input.
We evaluated subject-area-specific biases in the training data and model performance of Stable Diffusion.
We found that the training data over-represented scenic locations, such as famous rivers and waterfalls, while severely under-representing morphological and environmental terms.
arXiv Detail & Related papers (2023-12-13T01:40:21Z)
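An audit of this kind can start from the public diffusers API; the sketch below uses one common public Stable Diffusion checkpoint (not necessarily the one the authors evaluated), and the two prompts are invented examples contrasting a scenic term with a morphological one.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (model id is one common choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Contrast a scenic prompt with a morphological one; systematic quality or
# content gaps between the two hint at training-data bias.
prompts = [
    "a famous waterfall on a river",      # scenic, likely over-represented
    "a braided gravel-bed river reach",   # morphological, likely under-represented
]
for prompt in prompts:
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"{prompt[:20].replace(' ', '_')}.png")
```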
- HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion [114.15397904945185]
We propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts.
Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network.
Our framework yields state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
arXiv Detail & Related papers (2023-10-12T17:59:34Z)
- Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena caused by light rays refracted at the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the method employs deep learning architectures to classify image pixels as "caustics" or "non-caustics".
arXiv Detail & Related papers (2022-12-20T11:11:02Z)
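A minimal sketch of the pixel-classification step, assuming a plain fully-convolutional network; the paper's actual architecture differs, and all names here are illustrative.
```python
import torch
import torch.nn as nn

class CausticsSegmenter(nn.Module):
    """Tiny fully-convolutional classifier labeling every pixel of an
    underwater image as 'caustics' (1) or 'non-caustics' (0)."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),  # per-pixel logit
        )

    def forward(self, x):
        return self.net(x)

model = CausticsSegmenter()
image = torch.rand(1, 3, 256, 256)
caustics_mask = (torch.sigmoid(model(image)) > 0.5).float()
# Downstream, intensity in flagged pixels can then be corrected, e.g. by
# attenuating them toward the local non-caustics average.
print(caustics_mask.mean())  # fraction of pixels flagged as caustics
```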
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
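A schematic of the bootstrapped-inversion loop, with toy stand-ins for the pretrained 3D-aware generator and the feed-forward first-guess model; nothing here reflects the paper's actual networks.
```python
import torch
import torch.nn as nn

# Placeholders standing in for a pretrained 3D-aware generator and a
# feed-forward encoder that produces the "first guess" latent.
latent_dim = 128
G = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh())
E = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))

target = torch.rand(1, 3, 32, 32) * 2 - 1  # image to de-render

# Bootstrap: the encoder's guess initializes the latent, ...
z = E(target).detach().clone().requires_grad_(True)
opt = torch.optim.Adam([z], lr=0.05)

# ... then a handful of inversion steps refine it against the target.
for step in range(10):
    recon = G(z).view(1, 3, 32, 32)
    loss = torch.mean((recon - target) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"reconstruction MSE after 10 steps: {loss.item():.4f}")
```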
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
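A minimal multi-head discriminator sketch of this idea: a shared backbone feeding illustrative heads for adversarial, segmentation, and reconstruction outputs. All names are hypothetical and the architecture is far simpler than the paper's.
```python
import torch
import torch.nn as nn

class SharedRepDiscriminator(nn.Module):
    """Discriminator backbone whose shared features feed three heads:
    real/fake logits, semantic segmentation, and image reconstruction."""
    def __init__(self, num_classes=20, width=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.adv_head = nn.Conv2d(width, 1, 1)            # patch real/fake logits
        self.seg_head = nn.Conv2d(width, num_classes, 1)  # per-pixel class logits
        self.rec_head = nn.Conv2d(width, 3, 1)            # content reconstruction

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.seg_head(h), self.rec_head(h)

D = SharedRepDiscriminator()
adv, seg, rec = D(torch.rand(1, 3, 64, 64))
print(adv.shape, seg.shape, rec.shape)
```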
- Physics-informed GANs for Coastal Flood Visualization [65.54626149826066]
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery against physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our Earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z)
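One simple proxy for such a physical-consistency evaluation is the intersection-over-union between the flood extent segmented from a generated image and the physics-based flood map; the NumPy sketch below is an assumed metric choice for illustration, not a metric taken from the paper.
```python
import numpy as np

def flood_iou(generated_mask, physics_mask):
    """Physical-consistency proxy: intersection-over-union between the flood
    extent segmented from a generated image and a physics-based flood map."""
    generated_mask = generated_mask.astype(bool)
    physics_mask = physics_mask.astype(bool)
    inter = np.logical_and(generated_mask, physics_mask).sum()
    union = np.logical_or(generated_mask, physics_mask).sum()
    return inter / union if union > 0 else 1.0

# Toy check: a generated flood covering half of the physics-predicted extent.
physics = np.zeros((8, 8), dtype=bool); physics[:, :4] = True
generated = np.zeros((8, 8), dtype=bool); generated[:, :2] = True
print(flood_iou(generated, physics))  # 0.5
```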
- Breaking the Limits of Remote Sensing by Simulation and Deep Learning for Flood and Debris Flow Mapping [13.167695669500391]
We propose a framework that estimates inundation depth and debris-flow-induced topographic deformation from remote sensing imagery.
A water and debris flow simulator generates training data for various artificial disaster scenarios.
We show that regression models based on Attention U-Net and LinkNet architectures trained on such synthetic data can predict the maximum water level and topographic deformation.
arXiv Detail & Related papers (2020-06-09T10:59:15Z)
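A minimal sketch of such a regression setup in plain PyTorch, using a toy fully-convolutional network and random tensors in place of the simulator data; Attention U-Net and LinkNet are the architectures the paper actually uses, and this stand-in only illustrates the training signal.
```python
import torch
import torch.nn as nn

class DepthRegressor(nn.Module):
    """Minimal fully-convolutional regressor mapping a post-event image to a
    per-pixel inundation depth (in metres), trained on simulator output."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1), nn.ReLU(inplace=True),  # depths are non-negative
        )

    def forward(self, x):
        return self.net(x)

model = DepthRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
image = torch.rand(4, 3, 64, 64)        # synthetic post-disaster imagery
depth = torch.rand(4, 1, 64, 64) * 5.0  # simulator-provided water depth labels
opt.zero_grad()
loss = nn.functional.mse_loss(model(image), depth)
loss.backward(); opt.step()
print(f"regression loss: {loss.item():.3f}")
```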