Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric
Fields with a Generative Adversarial Network
- URL: http://arxiv.org/abs/2005.10374v4
- Date: Mon, 19 Oct 2020 14:28:34 GMT
- Authors: Jussi Leinonen, Daniele Nerini, Alexis Berne
- Abstract summary: We introduce a recurrent, super-resolution GAN that can generate ensembles of time-evolving high-resolution atmospheric fields for an input consisting of a low-resolution sequence of images of the same field.
We find that the GAN can generate realistic, temporally consistent super-resolution sequences for both datasets.
As the GAN generator is fully convolutional, it can be applied after training to input images larger than the images used to train it.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have been recently adopted for
super-resolution, an application closely related to what is referred to as
"downscaling" in the atmospheric sciences: improving the spatial resolution of
low-resolution images. The ability of conditional GANs to generate an ensemble
of solutions for a given input lends itself naturally to stochastic
downscaling, but the stochastic nature of GANs is not usually considered in
super-resolution applications. Here, we introduce a recurrent, stochastic
super-resolution GAN that can generate ensembles of time-evolving
high-resolution atmospheric fields for an input consisting of a low-resolution
sequence of images of the same field. We test the GAN using two datasets, one
consisting of radar-measured precipitation from Switzerland, the other of cloud
optical thickness derived from the Geostationary Operational Environmental Satellite 16
(GOES-16). We find that the GAN can generate realistic, temporally consistent
super-resolution sequences for both datasets. The statistical properties of the
generated ensemble are analyzed using rank statistics, a method adapted from
ensemble weather forecasting; these analyses indicate that the GAN produces
close to the correct amount of variability in its outputs. As the GAN generator
is fully convolutional, it can be applied after training to input images larger
than the images used to train it. It is also able to generate time series much
longer than the training sequences, as demonstrated by applying the generator
to a three-month dataset of the precipitation radar data. The source code to
our GAN is available at https://github.com/jleinonen/downscaling-rnn-gan.
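The rank-statistics analysis mentioned in the abstract checks ensemble calibration: each observation is ranked among the corresponding ensemble members, and a well-calibrated ensemble yields a flat (uniform) rank histogram. A minimal sketch of this idea, not the authors' code (the `rank_histogram` function and the synthetic Gaussian example are illustrative assumptions):

```python
import numpy as np

def rank_histogram(observations, ensembles):
    """Count the rank of each observation within its ensemble.

    observations: array of shape (n_cases,)
    ensembles: array of shape (n_cases, n_members)
    Returns counts for each possible rank 0..n_members.
    """
    n_cases, n_members = ensembles.shape
    # rank = number of ensemble members strictly below the observation
    ranks = (ensembles < observations[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=n_members + 1)

# If the ensemble members are drawn from the same distribution as the
# observations, each rank is equally likely and the histogram is flat.
rng = np.random.default_rng(0)
obs = rng.normal(size=10000)
ens = rng.normal(size=(10000, 9))
counts = rank_histogram(obs, ens)  # ~1000 cases expected in each of 10 bins
```

A U-shaped histogram would instead indicate an under-dispersive ensemble (too little variability), a dome-shaped one an over-dispersive ensemble; the paper reports histograms close to flat, i.e. close to the correct amount of variability.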
Related papers
- FFEINR: Flow Feature-Enhanced Implicit Neural Representation for Spatio-temporal Super-Resolution [4.577685231084759]
This paper proposes a Feature-Enhanced Neural Implicit Representation (FFEINR) for super-resolution of flow field data.
It can take full advantage of the implicit neural representation in terms of model structure and sampling resolution.
The training process of FFEINR is facilitated by introducing feature enhancements for the input layer.
arXiv Detail & Related papers (2023-08-24T02:28:18Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- FunkNN: Neural Interpolation for Functional Generation [23.964801524703052]
FunkNN is a new convolutional network which learns to reconstruct continuous images at arbitrary coordinates and can be applied to any image dataset.
We show that FunkNN generates high-quality continuous images and exhibits strong out-of-distribution performance thanks to its patch-based design.
arXiv Detail & Related papers (2022-12-20T16:37:20Z)
- A Generative Deep Learning Approach to Stochastic Downscaling of Precipitation Forecasts [0.5906031288935515]
Generative adversarial networks (GANs) have been demonstrated by the computer vision community to be successful at super-resolution problems.
We show that GANs and VAE-GANs can match the statistical properties of state-of-the-art pointwise post-processing methods whilst creating high-resolution, spatially coherent precipitation maps.
arXiv Detail & Related papers (2022-04-05T07:19:42Z)
- Super-resolution GANs of randomly-seeded fields [68.8204255655161]
We propose a novel super-resolution generative adversarial network (GAN) framework to estimate field quantities from random sparse sensors.
The algorithm exploits random sampling to provide incomplete views of the high-resolution underlying distributions.
The proposed technique is tested on synthetic databases of fluid flow simulations, ocean surface temperature measurements, and particle image velocimetry data.
arXiv Detail & Related papers (2022-02-23T18:57:53Z)
- InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Optimizing Generative Adversarial Networks for Image Super Resolution via Latent Space Regularization [4.529132742139768]
Generative Adversarial Networks (GANs) try to learn the distribution of real images on the underlying manifold to generate samples that look real.
We probe for ways to alleviate these problems for supervised GANs in this paper.
arXiv Detail & Related papers (2020-01-22T16:27:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.