A non-intrusive machine learning framework for debiasing long-time
coarse resolution climate simulations and quantifying rare events statistics
- URL: http://arxiv.org/abs/2402.18484v1
- Date: Wed, 28 Feb 2024 17:06:19 GMT
- Title: A non-intrusive machine learning framework for debiasing long-time
coarse resolution climate simulations and quantifying rare events statistics
- Authors: Benedikt Barthel Sorensen, Alexis Charalampopoulos, Shixuan Zhang,
Bryce Harrop, Ruby Leung, Themistoklis Sapsis
- Abstract summary: Coarse models suffer from inherent bias due to the ignored "sub-grid" scales.
We propose a framework to non-intrusively debias coarse-resolution climate predictions using neural-network (NN) correction operators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the rapidly changing climate, the frequency and severity of extreme
weather is expected to increase over the coming decades. As fully-resolved
climate simulations remain computationally intractable, policy makers must rely
on coarse models to quantify the risk of extremes. However, coarse models suffer
from inherent bias due to the ignored "sub-grid" scales. We propose a framework
to non-intrusively debias coarse-resolution climate predictions using
neural-network (NN) correction operators. Previous efforts have attempted to
train such operators using loss functions that match statistics. However, this
approach falls short for events with a longer return period than that of
the training data, since the reference statistics have not converged. Here, the
goal is to formulate a learning method that allows for the correction of dynamics
and the quantification of extreme events with longer return periods than the
training data. The key obstacle is the chaotic nature of the underlying
dynamics. To overcome this challenge, we introduce a dynamical systems approach
where the correction operator is trained using reference data and a coarse
model simulation nudged towards that reference. The method is demonstrated on
debiasing an under-resolved quasi-geostrophic model and the Energy Exascale
Earth System Model (E3SM). For the former, our method enables the
quantification of events with a return period two orders of magnitude longer than the
training data. For the latter, when trained on 8 years of ERA5 data, our
approach is able to correct the coarse E3SM output to closely reflect the
36-year ERA5 statistics for all prognostic variables and significantly reduce
their spatial biases.
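The nudged-training idea described in the abstract can be sketched in miniature. The following is a hedged illustration, not the paper's actual method: the Lorenz-63 system stands in for both the reference ("fine") dynamics and a parameter-biased "coarse" model, and a least-squares linear map stands in for the neural-network correction operator. The coarse model is nudged toward the reference trajectory, and the correction operator is then fit on pairs of (nudged state, reference discrepancy).

```python
import numpy as np

def lorenz(x, sigma, rho, beta):
    """Lorenz-63 right-hand side."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

dt, n_steps, tau = 0.01, 2000, 0.1

# Reference ("fine") model vs. a biased "coarse" model (perturbed rho).
f_ref    = lambda x: lorenz(x, 10.0, 28.0, 8.0 / 3.0)
f_coarse = lambda x: lorenz(x, 10.0, 24.0, 8.0 / 3.0)

# Generate the reference trajectory with forward Euler.
x = np.array([1.0, 1.0, 1.0])
ref = np.empty((n_steps, 3))
for i in range(n_steps):
    x = x + dt * f_ref(x)
    ref[i] = x

# Run the coarse model nudged toward the reference:
#   dx/dt = f_coarse(x) + (x_ref - x) / tau
x = np.array([1.0, 1.0, 1.0])
nudged = np.empty((n_steps, 3))
for i in range(n_steps):
    x = x + dt * (f_coarse(x) + (ref[i] - x) / tau)
    nudged[i] = x

# Train the correction operator on (nudged state -> reference discrepancy).
# A least-squares linear map replaces the paper's neural network here.
A, *_ = np.linalg.lstsq(nudged, ref - nudged, rcond=None)
corrected = nudged + nudged @ A

err_raw = np.mean((ref - nudged) ** 2)
err_cor = np.mean((ref - corrected) ** 2)
```

The point of the nudging step is that the correction operator sees coarse-model states that remain on (or near) the reference attractor, which sidesteps the trajectory divergence that chaotic dynamics would otherwise cause during training.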
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z) - ReFine: Boosting Time Series Prediction of Extreme Events by Reweighting and Fine-tuning [0.0]
Extreme events are of great importance because of their outsized impact.
Accurately predicting these extreme events is challenging due to their rarity and irregularity.
We propose two strategies, reweighting and fine-tuning, to tackle the challenge.
arXiv Detail & Related papers (2024-09-21T19:29:29Z) - A probabilistic framework for learning non-intrusive corrections to long-time climate simulations from short-time training data [12.566163525039558]
We present a strategy for training neural network models to non-intrusively correct under-resolved long-time simulations of chaotic systems.
We demonstrate its ability to accurately predict the anisotropic statistics over time horizons more than 30 times longer than the data seen in training.
arXiv Detail & Related papers (2024-08-02T18:34:30Z) - LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised
Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder (VAE) based time series anomaly detection methods.
This work aims to make three novel contributions: 1) the retraining process is formulated as a convex problem and can converge at a fast rate as well as prevent overfitting; 2) designing a ruminate block, which leverages the historical data without the need to store them; and 3) mathematically proving that when fine-tuning the latent vector and reconstructed data, the linear formations can achieve the least adjusting errors between the ground truths and the fine-tuned ones.
arXiv Detail & Related papers (2023-10-09T12:36:16Z) - How to Learn and Generalize From Three Minutes of Data:
Physics-Constrained and Uncertainty-Aware Neural Stochastic Differential
Equations [24.278738290287293]
We present a framework and algorithms to learn controlled dynamics models using neural stochastic differential equations (SDEs).
We construct the drift term to leverage a priori physics knowledge as inductive bias, and we design the diffusion term to represent a distance-aware estimate of the uncertainty in the learned model's predictions.
We demonstrate these capabilities through experiments on simulated robotic systems, as well as by using them to model and control a hexacopter's flight dynamics.
arXiv Detail & Related papers (2023-06-10T02:33:34Z) - Learning Sample Difficulty from Pre-trained Models for Reliable
Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and
Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, this approach can produce artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
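The general idea behind deterministically approximating input-noise training can be illustrated with a classical result (Bishop, 1995), which is related to but not the same as the LMNT construction in the paper above: for a linear model, training on inputs perturbed by many small Gaussian noise realizations is, in expectation, equivalent to ridge regularization with strength proportional to the noise variance. The sketch below compares the two empirically on a toy regression problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression problem.
n, d, sigma2 = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Noise-augmented training: average the normal equations over many
# independently perturbed copies of the inputs.
k = 2000
XtX_noisy = np.zeros((d, d))
Xty_noisy = np.zeros(d)
for _ in range(k):
    Xn = X + np.sqrt(sigma2) * rng.normal(size=X.shape)
    XtX_noisy += Xn.T @ Xn / k
    Xty_noisy += Xn.T @ y / k
w_noise = np.linalg.solve(XtX_noisy, Xty_noisy)

# Deterministic equivalent: ridge regression with lambda = n * sigma2,
# since E[Xn.T @ Xn] = X.T @ X + n * sigma2 * I.
w_ridge = np.linalg.solve(X.T @ X + n * sigma2 * np.eye(d), X.T @ y)
```

The two weight vectors agree up to sampling noise in the k averaged realizations, which is the sense in which a deterministic regularizer can replace many stochastic noise draws during training.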
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Long-term stability and generalization of observationally-constrained
stochastic data-driven models for geophysical turbulence [0.19686770963118383]
Deep learning models can mitigate certain biases in current state-of-the-art weather models.
Data-driven models require large amounts of training data, which may not be available from reanalysis (observational) products.
Deterministic data-driven forecasting models suffer from issues with long-term stability and unphysical climate drift.
We propose a convolutional variational autoencoder-based data-driven model that is pre-trained on an imperfect climate model simulation.
arXiv Detail & Related papers (2022-05-09T23:52:37Z) - State-Space Models Win the IEEE DataPort Competition on Post-covid
Day-ahead Electricity Load Forecasting [0.0]
We present the winning strategy of an electricity demand forecasting competition.
This competition was organized to design new forecasting methods for unstable periods such as the one starting in Spring 2020.
We rely on state-space models to adapt standard statistical and machine learning models.
arXiv Detail & Related papers (2021-10-01T11:57:37Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.