SRNDiff: Short-term Rainfall Nowcasting with Condition Diffusion Model
- URL: http://arxiv.org/abs/2402.13737v1
- Date: Wed, 21 Feb 2024 12:06:06 GMT
- Title: SRNDiff: Short-term Rainfall Nowcasting with Condition Diffusion Model
- Authors: Xudong Ling, Chaorong Li, Fengqing Qin, Peng Yang, Yuanyuan Huang
- Abstract summary: We introduce the diffusion model to the precipitation forecasting task.
We propose SRNDiff, a short-term precipitation nowcasting method built on a conditional diffusion model and conditioned on historical observational data.
By incorporating an additional conditional decoder module into the denoising process, SRNDiff achieves end-to-end conditional rainfall prediction.
- Score: 5.21064926344773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models are widely used in image generation because they can
generate high-quality and realistic samples, in contrast to generative
adversarial networks (GANs) and variational autoencoders (VAEs), which have
some limitations in terms of image quality. We introduce the diffusion model to
the precipitation forecasting task and propose a short-term precipitation
nowcasting method based on a conditional diffusion model and historical
observational data, which we refer to as SRNDiff. By incorporating an
additional conditional decoder module into the denoising process, SRNDiff
achieves end-to-end conditional rainfall prediction. SRNDiff is composed of two
networks: a denoising network and a conditional encoder network. The
conditional encoder is composed of multiple independent UNet networks. These
networks extract conditional feature maps at different resolutions, providing
accurate conditional information that guides the diffusion model in
conditional generation. SRNDiff surpasses GANs in prediction accuracy,
although it requires more computational resources. The SRNDiff model exhibits
higher stability and efficiency during training than GAN-based approaches, and
generates high-quality precipitation distribution samples that better reflect
future actual precipitation conditions. This fully validates the advantages and
potential of diffusion models in precipitation forecasting, providing new
insights for enhancing rainfall prediction.
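
The architecture described above maps naturally onto a denoiser guided by multi-resolution condition features. Below is a minimal PyTorch-style sketch of that idea; all class names, channel counts, and the choice of stacked past radar frames as the condition are illustrative assumptions, not the authors' released implementation (which, per the abstract, uses full UNet branches and an additional conditional decoder module).

```python
# Hedged sketch of a conditional denoiser in the spirit of SRNDiff.
# Module names, shapes, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in for one conditional encoder branch: downsample, process, upsample."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)
        self.mid = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.up = nn.ConvTranspose2d(out_ch, out_ch, 4, stride=2, padding=1)

    def forward(self, x):
        h = F.relu(self.down(x))
        h = F.relu(self.mid(h))
        return self.up(h)

class ConditionEncoder(nn.Module):
    """Independent branches that emit condition feature maps at several resolutions."""
    def __init__(self, cond_ch, feat_ch, scales=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([TinyUNet(cond_ch, feat_ch) for _ in scales])
        self.scales = scales

    def forward(self, cond):
        feats = []
        for scale, branch in zip(self.scales, self.branches):
            x = F.avg_pool2d(cond, scale) if scale > 1 else cond
            feats.append(branch(x))  # one feature map per resolution
        return feats

class ConditionalDenoiser(nn.Module):
    """Denoising network that predicts the noise added to a future radar frame,
    guided by multi-resolution condition features extracted from past frames."""
    def __init__(self, img_ch=1, cond_ch=4, feat_ch=32):
        super().__init__()
        self.cond_encoder = ConditionEncoder(cond_ch, feat_ch)
        self.in_conv = nn.Conv2d(img_ch, feat_ch, 3, padding=1)
        self.fuse = nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1)
        self.out_conv = nn.Conv2d(feat_ch, img_ch, 3, padding=1)

    def forward(self, noisy_frame, t, past_frames):
        feats = self.cond_encoder(past_frames)
        h = F.relu(self.in_conv(noisy_frame))
        # Inject each condition feature map at the denoiser's working resolution.
        for f in feats:
            f = F.interpolate(f, size=h.shape[-2:], mode="bilinear",
                              align_corners=False)
            h = F.relu(self.fuse(torch.cat([h, f], dim=1)))
        # A timestep embedding for t would normally be added; omitted for brevity.
        return self.out_conv(h)  # predicted noise, same shape as noisy_frame

# Example shapes: four past frames as condition channels, one future frame predicted.
model = ConditionalDenoiser(img_ch=1, cond_ch=4, feat_ch=32)
noisy = torch.randn(2, 1, 64, 64)
past = torch.randn(2, 4, 64, 64)
t = torch.randint(0, 1000, (2,))
pred_noise = model(noisy, t, past)  # -> (2, 1, 64, 64)
```

Training such a denoiser would then follow the standard diffusion recipe: add noise to a future frame, predict that noise conditioned on the past frames, and minimize the squared error (see the DDPM objective at the end of the related-papers list).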
Related papers
- SAR Image Synthesis with Diffusion Models [0.0]
Diffusion models (DMs) have become a popular method for generating synthetic data.
In this work, a specific type of DMs, namely denoising diffusion probabilistic model (DDPM) is adapted to the SAR domain.
We show that DDPM qualitatively and quantitatively outperforms state-of-the-art GAN-based methods for SAR image generation.
arXiv Detail & Related papers (2024-05-13T14:21:18Z) - Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z) - Neural Network Parameter Diffusion [50.85251415173792]
Diffusion models have achieved remarkable success in image and video generation.
In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters.
arXiv Detail & Related papers (2024-02-20T16:59:03Z) - CADS: Unleashing the Diversity of Diffusion Models through Condition-Annealed Sampling [27.795088366122297]
The Condition-Annealed Diffusion Sampler (CADS) can be used with any pretrained model and sampling algorithm.
We show that it boosts the diversity of diffusion models in various conditional generation tasks.
arXiv Detail & Related papers (2023-10-26T12:27:56Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional
Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - Bayesian Flow Networks [4.585102332532472]
This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference.
Starting from a simple prior and iteratively updating the two distributions yields a generative procedure similar to the reverse process of diffusion models.
BFNs achieve competitive log-likelihoods for image modelling on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion models on the text8 character-level language modelling task.
arXiv Detail & Related papers (2023-08-14T09:56:35Z) - Precipitation nowcasting with generative diffusion models [0.0]
We study the efficacy of diffusion models in handling the task of precipitation nowcasting.
Our work is conducted in comparison to the performance of well-established U-Net models.
arXiv Detail & Related papers (2023-08-13T09:51:16Z) - Conditional Generation from Unconditional Diffusion Models using
Denoiser Representations [94.04631421741986]
We propose adapting pre-trained unconditional diffusion models to new conditions using the learned internal representations of the denoiser network.
We show that augmenting the Tiny ImageNet training set with synthetic images generated by our approach improves the classification accuracy of ResNet baselines by up to 8%.
arXiv Detail & Related papers (2023-06-02T20:09:57Z) - Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution [82.50210340928173]
The randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of super-resolution (SR) results.
We propose a plug-and-play sampling method that owns the potential to benefit a series of diffusion-based SR methods.
The quality of SR results sampled by the proposed method with fewer steps outperforms the quality of results sampled by current methods with randomness from the same pre-trained diffusion-based SR model.
arXiv Detail & Related papers (2023-05-24T17:09:54Z) - Latent diffusion models for generative precipitation nowcasting with
accurate uncertainty quantification [1.7718093866806544]
We introduce a latent diffusion model (LDM) for precipitation nowcasting - short-term forecasting based on the latest observational data.
We benchmark it against the GAN-based Deep Generative Models of Rainfall (DGMR) and a statistical model, PySTEPS.
The clearest advantage of the LDM is that it generates more diverse predictions than DGMR or PySTEPS.
arXiv Detail & Related papers (2023-04-25T15:03:15Z) - Denoising Diffusion Probabilistic Models [91.94962645056896]
We present high quality image synthesis results using diffusion probabilistic models.
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics.
arXiv Detail & Related papers (2020-06-19T17:24:44Z)
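
For reference, the weighted variational bound mentioned in that last entry reduces, in the simplified form used by Ho et al., to a plain noise-prediction objective. The notation below (noise schedule beta_s, denoiser epsilon_theta) follows common convention rather than anything stated in this listing:

```latex
% Simplified DDPM training objective (noise-prediction form)
L_{\mathrm{simple}}(\theta)
  = \mathbb{E}_{t,\,x_0,\,\epsilon}\left[
      \bigl\| \epsilon - \epsilon_\theta\bigl(\sqrt{\bar{\alpha}_t}\,x_0
        + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\bigr) \bigr\|^{2}
    \right],
\qquad
\epsilon \sim \mathcal{N}(0, I), \quad
\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s).
```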