Can AI be enabled to dynamical downscaling? A Latent Diffusion Model to mimic km-scale COSMO5.0_CLM9 simulations
- URL: http://arxiv.org/abs/2406.13627v2
- Date: Thu, 22 Aug 2024 08:46:14 GMT
- Title: Can AI be enabled to dynamical downscaling? A Latent Diffusion Model to mimic km-scale COSMO5.0_CLM9 simulations
- Authors: Elena Tomasi, Gabriele Franch, Marco Cristoforetti
- Abstract summary: Downscaling techniques are one of the most prominent applications of Deep Learning (DL) in Earth System Modeling.
In this study, we apply a Latent Diffusion Model (LDM) to downscale ERA5 data over Italy up to a resolution of 2 km.
Our goal is to demonstrate that recent advancements in generative modeling enable DL to deliver results comparable to those of numerical dynamical models.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Downscaling techniques are one of the most prominent applications of Deep Learning (DL) in Earth System Modeling. A robust DL downscaling model can generate high-resolution fields from coarse-scale numerical model simulations, avoiding the time- and resource-intensive application of regional/local models. Additionally, generative DL models can provide uncertainty information by generating ensemble-like scenario pools, a task that is computationally prohibitive for traditional numerical simulations. In this study, we apply a Latent Diffusion Model (LDM) to downscale ERA5 data over Italy up to a resolution of 2 km. The high-resolution target data consist of 2-m temperature and 10-m horizontal wind components from a dynamical downscaling performed with COSMO_CLM. Our goal is to demonstrate that recent advancements in generative modeling enable DL to deliver results comparable to those of numerical dynamical models, given the same input data, while preserving the realism of fine-scale features and flow characteristics. A selection of predictors from ERA5 is used as input to the LDM, and the LDM is applied with a residual approach against a reference UNET. The performance of the generative LDM is compared with reference baselines of increasing complexity: quadratic interpolation of ERA5, a UNET, and a Generative Adversarial Network (GAN) built on the same reference UNET. Results highlight the improvements introduced by the LDM architecture and the residual approach over these baselines. The models are evaluated on a yearly test dataset, assessing their performance through deterministic metrics, spatial distribution of errors, and reconstruction of frequency and power spectra distributions.
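The residual approach mentioned above is the key training trick: the generative model does not predict the high-resolution fields directly, but only the residual left over by the deterministic UNET baseline. As a rough illustration of that idea (not the authors' implementation, which diffuses in a compressed latent space and conditions on ERA5 predictors), a DDPM-style training step on residuals might look as follows; every module, shape, and name here is invented for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical shapes: 8 coarse predictor channels, 3 target channels
# (2-m temperature, 10-m U and V wind). Real data replaces the randoms.
B, C_IN, C_OUT, H, W = 4, 8, 3, 64, 64

baseline = nn.Sequential(          # stand-in for the reference UNET
    nn.Conv2d(C_IN, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, C_OUT, 3, padding=1),
)
denoiser = nn.Sequential(          # stand-in for the diffusion backbone
    nn.Conv2d(C_OUT + 1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, C_OUT, 3, padding=1),
)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # DDPM noise schedule

x_lr = torch.randn(B, C_IN, H, W)    # coarse predictors (pre-upsampled)
y_hr = torch.randn(B, C_OUT, H, W)   # km-scale target fields

with torch.no_grad():                # baseline assumed pretrained
    residual = y_hr - baseline(x_lr) # diffusion models the residual only

t = torch.randint(0, T, (B,))
noise = torch.randn_like(residual)
a = alpha_bar[t].view(B, 1, 1, 1)
noisy = a.sqrt() * residual + (1 - a).sqrt() * noise  # forward process

# Crude timestep conditioning: append normalized t as an extra channel.
t_map = (t.float() / T).view(B, 1, 1, 1).expand(B, 1, H, W)
pred_noise = denoiser(torch.cat([noisy, t_map], dim=1))
loss = F.mse_loss(pred_noise, noise)  # standard epsilon-prediction loss
loss.backward()
```

At sampling time, the reverse diffusion generates a residual that is added back onto the UNET prediction; repeating the sampling with different noise yields the ensemble-like scenario pools mentioned in the abstract.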
Related papers
- Koopman-Based Surrogate Modelling of Turbulent Rayleigh-Bénard Convection
We use a Koopman-inspired architecture called the Linear Recurrent Autoencoder Network (LRAN) for learning reduced-order dynamics in convection flows.
A traditional fluid dynamics method, Kernel Dynamic Mode Decomposition (KDMD), is used as a baseline for comparison with the LRAN.
We obtained more accurate predictions with the LRAN than with KDMD in the most turbulent setting.
arXiv Detail & Related papers (2024-05-10T12:15:02Z)
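The LRAN recipe, encode a snapshot, advance it with a single learned linear map in latent space (the Koopman-inspired part), and decode back, fits in a few lines. The sketch below is a generic stand-in rather than the authors' architecture; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class LRAN(nn.Module):
    """Linear Recurrent Autoencoder Network (schematic stand-in)."""
    def __init__(self, n_state=256, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_state, 64), nn.Tanh(),
                                     nn.Linear(64, n_latent))
        # The Koopman-inspired part: latent dynamics are purely linear.
        self.K = nn.Linear(n_latent, n_latent, bias=False)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                     nn.Linear(64, n_state))

    def forward(self, x, steps=1):
        z = self.encoder(x)
        for _ in range(steps):        # roll the latent state forward
            z = self.K(z)
        return self.decoder(z)

model = LRAN()
x_t = torch.randn(8, 256)             # batch of flattened flow snapshots
x_pred = model(x_t, steps=5)          # 5-step-ahead prediction
loss = nn.functional.mse_loss(x_pred, torch.randn(8, 256))  # vs. x_{t+5}
loss.backward()
```

Because the latent transition is linear, its eigendecomposition plays the same role as the Koopman spectrum that KDMD approximates, which is what makes the two methods directly comparable.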
- Generalization capabilities and robustness of hybrid machine learning models grounded in flow physics compared to purely deep learning models
This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications.
Three autoregressive models were compared: a convolutional autoencoder combined with a convolutional LSTM, a variational autoencoder (VAE) combined with a ConvLSTM, and a hybrid model that combines proper orthogonal decomposition (POD) with an LSTM (POD-DL).
While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others across both laminar and turbulent flow regimes.
arXiv Detail & Related papers (2024-04-27T12:43:02Z)
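The POD-DL hybrid named above is straightforward to picture: project snapshots onto the leading POD modes obtained from an SVD, let a sequence model forecast the modal coefficients, and reconstruct in physical space. A minimal sketch under those assumptions, with toy data and invented dimensions:

```python
import numpy as np
import torch
import torch.nn as nn

# Toy snapshot matrix: 500 time steps of a 1024-dim flattened flow field.
snapshots = np.random.randn(500, 1024).astype(np.float32)

# POD via SVD: keep the r most energetic spatial modes.
r = 8
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:r]                         # (r, 1024) spatial POD modes
coeffs = (snapshots - mean) @ modes.T  # (500, r) temporal coefficients

# DL part: an LSTM forecasts the next coefficient vector.
lstm = nn.LSTM(input_size=r, hidden_size=32, batch_first=True)
head = nn.Linear(32, r)

seq = torch.from_numpy(coeffs).unsqueeze(0)  # (1, 500, r)
out, _ = lstm(seq[:, :-1])                   # inputs: steps 0..498
pred = head(out)                             # predictions for steps 1..499
loss = nn.functional.mse_loss(pred, seq[:, 1:])
loss.backward()

# Reconstruct a physical-space forecast from the last predicted coefficients.
forecast = pred.detach().numpy()[0, -1] @ modes + mean
```

The physics enters through the POD basis, which constrains the network to the low-dimensional subspace carrying most of the flow energy; this is plausibly why the hybrid generalizes better than the purely deep models in the study.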
- Synthetic location trajectory generation using categorical diffusion models
Diffusion probabilistic models (DPMs) have rapidly evolved into one of the predominant generative model families for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs), which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- Generative Modeling with Phase Stochastic Bridges
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides only an unsatisfactory projection of the data space, which results in poor representation learning.
We show that treating the latent space as a manifold on which geodesics can be computed accurately can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- A Neural PDE Solver with Temporal Stencil Modeling
Recent Machine Learning (ML) models have shown new promise in capturing important dynamics in high-resolution signals.
This study shows that significant information is often lost in the low-resolution down-sampled features.
We propose a new approach, which combines the strengths of advanced time-series sequence modeling and state-of-the-art neural PDE solvers.
arXiv Detail & Related papers (2023-02-16T06:13:01Z)
- DeepVARwT: Deep Learning for a VAR Model with Trend
We propose a new approach that employs deep learning methodology for maximum likelihood estimation of the trend and the dependence structure.
A Long Short-Term Memory (LSTM) network is used for this purpose.
We provide a simulation study and an application to real data.
arXiv Detail & Related papers (2022-09-21T18:23:03Z)
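The model class behind DeepVARwT is compact to state: a vector autoregression around a time-varying trend, with the trend generated by the LSTM and all parameters fitted by maximum likelihood. Schematically, with notation assumed here for illustration rather than copied from the paper:

```latex
% VAR(p) around an LSTM-generated trend \mu_t (schematic)
y_t = \mu_t + \sum_{i=1}^{p} A_i \left( y_{t-i} - \mu_{t-i} \right) + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma)
```

The Gaussian innovations make the log-likelihood explicit, so the network producing the trend can be trained by direct likelihood maximization.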
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds (ELBOs) for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
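The "mixed effects" construction in ME-NODE can be summarized in one line: every subject in the panel shares fixed dynamics parameters, while subject i carries its own random effect. Schematically, with notation assumed here rather than taken from the paper:

```latex
% Neural ODE dynamics with a per-subject random effect w_i (schematic)
\frac{\mathrm{d}x_i(t)}{\mathrm{d}t} = f_\theta\big(x_i(t), w_i\big),
\qquad w_i \sim \mathcal{N}(0, \Sigma_w)
```

The variational treatment then handles the posterior over w_i much as a VAE handles its latent code, which is what makes an ELBO derivation possible.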
- Closed-form Continuous-Depth Models
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster than ODE-based models that require numerical solvers.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
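The closed form is the whole point of CfC networks: the hidden state is given by an explicit formula rather than by a numerical ODE solve. To the best of our reading of the paper, the update has roughly the shape below, where a sigmoidal time gate blends two learned maps; treat it as a schematic rather than a verbatim quote.

```latex
% Schematic CfC state (a time gate blending two learned networks)
x(t) = \sigma\big(-f(x, I; \theta_f)\, t\big) \odot g(x, I; \theta_g)
     + \Big[ 1 - \sigma\big(-f(x, I; \theta_f)\, t\big) \Big] \odot h(x, I; \theta_h)
```

Since evaluating x(t) needs no solver iterations, the order-of-magnitude speedup over continuous-depth models follows directly.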
- Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening Simulations
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures from flow data.
This paper proposes a strategy that enables DMD to extract coherent structures from observations with different mesh topologies and dimensions, as produced by adaptive mesh refinement and coarsening simulations.
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
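For reference, plain exact DMD, the method being extended here (the paper's contribution is making snapshots from different meshes comparable, e.g. by projection onto a common mesh), fits a best-fit linear operator between successive snapshots. A minimal NumPy version on toy data:

```python
import numpy as np

# Toy snapshot matrix: columns are states at successive times.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 60))   # 100-dim state, 60 snapshots
X1, X2 = X[:, :-1], X[:, 1:]         # pairs (x_k, x_{k+1})

# Exact DMD: find A with X2 ~ A @ X1 via a rank-r SVD of X1.
r = 10
U, S, Vh = np.linalg.svd(X1, full_matrices=False)
U_r, S_r, V_r = U[:, :r], S[:r], Vh[:r].conj().T

# Reduced operator; its eigenpairs give DMD eigenvalues and modes.
A_tilde = U_r.conj().T @ X2 @ V_r / S_r   # divide columns by singular values
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ V_r @ np.diag(1.0 / S_r) @ W # exact DMD modes

print(modes.shape, eigvals.shape)         # (100, 10) (10,)
```

The eigenvalues encode growth rates and frequencies of the extracted coherent structures; the paper's mesh-handling strategy is what keeps the snapshot columns well defined when the AMR/C grid changes between time steps.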