Fast, Scale-Adaptive, and Uncertainty-Aware Downscaling of Earth System
Model Fields with Generative Foundation Models
- URL: http://arxiv.org/abs/2403.02774v1
- Date: Tue, 5 Mar 2024 08:41:41 GMT
- Authors: Philipp Hess, Michael Aich, Baoxiang Pan, and Niklas Boers
- Abstract summary: We develop a consistency model (CM) that efficiently and accurately downscales arbitrary Earth system model (ESM) simulations without retraining in a zero-shot manner.
We show that the CM outperforms state-of-the-art diffusion models at a fraction of computational cost while maintaining high controllability on the downscaling task.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate and high-resolution Earth system model (ESM) simulations are
essential to assess the ecological and socio-economic impacts of anthropogenic
climate change, but are computationally too expensive. Recent machine learning
approaches have shown promising results in downscaling ESM simulations,
outperforming state-of-the-art statistical approaches. However, existing
methods require computationally costly retraining for each ESM and extrapolate
poorly to climates unseen during training. We address these shortcomings by
learning a consistency model (CM) that efficiently and accurately downscales
arbitrary ESM simulations without retraining in a zero-shot manner. Our
foundation model approach yields probabilistic downscaled fields at resolution
only limited by the observational reference data. We show that the CM
outperforms state-of-the-art diffusion models at a fraction of computational
cost while maintaining high controllability on the downscaling task. Further,
our method generalizes to climate states unseen during training without
explicitly formulated physical constraints.
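The zero-shot idea can be sketched in a few lines: perturb an upsampled coarse field with noise, then map it to the high-resolution data manifold in a single consistency-model evaluation. The sketch below is illustrative only, not the authors' implementation: `upsample_nearest`, `zero_shot_downscale`, and `toy_consistency_fn` are hypothetical names, and the box smoother merely stands in for a trained consistency network conditioned on observational reference data.

```python
import numpy as np

def upsample_nearest(field, factor):
    # naive nearest-neighbour upsampling of a coarse ESM field
    return np.repeat(np.repeat(field, factor, axis=0), factor, axis=1)

def zero_shot_downscale(coarse_field, consistency_fn, factor=4,
                        noise_level=0.5, seed=0):
    # Perturb the upsampled coarse field with Gaussian noise, then map it
    # back to the data manifold with one consistency-model evaluation.
    rng = np.random.default_rng(seed)
    x = upsample_nearest(coarse_field, factor)
    x_noisy = x + noise_level * rng.standard_normal(x.shape)
    return consistency_fn(x_noisy, noise_level)

def toy_consistency_fn(x_noisy, sigma):
    # Stand-in for a trained consistency network: a 3x3 box smoother.
    pad = np.pad(x_noisy, 1, mode="edge")
    h, w = x_noisy.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

coarse = np.random.default_rng(1).standard_normal((16, 16))  # mock ESM field
fine = zero_shot_downscale(coarse, toy_consistency_fn, factor=4)
print(fine.shape)  # (64, 64)
```

Because the consistency function is evaluated once rather than iterated over many denoising steps, this is where the speed advantage over diffusion models comes from; drawing several noise realizations yields the probabilistic ensemble of downscaled fields.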
Related papers
- Towards Causal Representations of Climate Model Data [18.82507552857727]
This work delves into the potential of causal representation learning, specifically the Causal Discovery with Single-parent Decoding (CDSD) method.
Our findings shed light on the challenges, limitations, and promise of using CDSD as a stepping stone towards more interpretable and robust climate model emulation.
arXiv Detail & Related papers (2023-12-05T16:13:34Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL [50.385005413810084]
Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real-environment exploration.
COPlanner is a planning-driven framework for model-based methods that addresses the problem of inaccurately learned dynamics models.
arXiv Detail & Related papers (2023-10-11T06:10:07Z)
- A physics-constrained machine learning method for mapping gapless land surface temperature [6.735896406986559]
In this paper, a physics-constrained machine learning (PC-ML) model is proposed to generate gapless LST with physical meaning and high accuracy.
The light gradient-boosting machine (LGBM) model, which uses only remote sensing data as input, serves as the pure ML model.
Compared with a pure physical method and pure ML methods, the PC-LGBM model improves the prediction accuracy and physical interpretability of LST.
arXiv Detail & Related papers (2023-07-03T01:44:48Z)
- DiffESM: Conditional Emulation of Earth System Models with Diffusion Models [2.1989764549743476]
A key application of Earth System Models (ESMs) is studying extreme weather events, such as heat waves or dry spells.
We show that diffusion models can effectively emulate the trends of ESMs under previously unseen climate scenarios.
arXiv Detail & Related papers (2023-04-23T17:12:33Z)
- Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
arXiv Detail & Related papers (2022-11-09T23:40:52Z)
- Physically Constrained Generative Adversarial Networks for Improving Precipitation Fields from Earth System Models [0.0]
Existing post-processing methods can improve ESM simulations locally, but cannot correct errors in modelled spatial patterns.
We propose a framework based on physically constrained generative adversarial networks (GANs) to improve local distributions and spatial structure simultaneously.
arXiv Detail & Related papers (2022-08-25T15:19:10Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose Conservative Model-Based Actor-Critic (CMBAC), a novel approach that achieves high sample efficiency without a strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
- Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data due to inaccurate model estimation for better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
- No MCMC for me: Amortized sampling for fast and stable training of energy-based models [62.1234885852552]
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We present a simple method for training EBMs at scale using an entropy-regularized generator to amortize the MCMC sampling.
Next, we apply our estimator to the recently proposed Joint Energy Model (JEM), where we match the original performance with faster and more stable training.
arXiv Detail & Related papers (2020-10-08T19:17:20Z)
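The amortized-sampling idea above, replacing MCMC with an entropy-regularized generator, can be illustrated on a toy problem where the answer is known. The sketch below is a hedged illustration, not the paper's method: the EBM is a fixed 1-D quadratic energy whose Gibbs density is a standard normal, the "generator" is a two-parameter Gaussian reparameterization, and all names are hypothetical.

```python
import numpy as np

# Toy EBM: E(x) = x^2 / 2, so the Gibbs density exp(-E(x)) is N(0, 1).
# Generator: x = mu + exp(s) * z with z ~ N(0, 1); its differential
# entropy is s + const, so the entropy term contributes -1 to dL/ds.
rng = np.random.default_rng(0)
mu, s, lr = 2.0, -1.0, 0.05

for _ in range(500):
    z = rng.standard_normal(1024)
    sigma = np.exp(s)
    x = mu + sigma * z
    # Monte Carlo gradients of  E[energy(x)] - entropy(generator)
    g_mu = np.mean(x)                    # d/dmu of E[x^2 / 2]
    g_s = np.mean(x * z) * sigma - 1.0   # d/ds, including entropy gradient
    mu -= lr * g_mu
    s -= lr * g_s

# The trained generator should recover the EBM's Gibbs distribution N(0, 1),
# i.e. mu -> 0 and exp(s) -> 1, with no MCMC chain anywhere.
```

Once trained, drawing a sample costs one generator evaluation instead of an MCMC chain, which is the amortization the paper's title refers to; the entropy term is what prevents the generator from collapsing onto the energy minimum.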
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.