Reconstructing Historical Climate Fields With Deep Learning
- URL: http://arxiv.org/abs/2311.18348v2
- Date: Wed, 30 Jul 2025 09:09:40 GMT
- Title: Reconstructing Historical Climate Fields With Deep Learning
- Authors: Nils Bochow, Anna Poltronieri, Martin Rypdal, Niklas Boers
- Abstract summary: We employ a recently introduced deep-learning approach based on Fourier convolutions, trained on numerical climate model output, to reconstruct historical climate fields. We are able to realistically reconstruct large and irregular areas of missing data, as well as reconstruct known historical events such as strong El Niño and La Niña with very little given information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Historical records of climate fields are often sparse due to missing measurements, especially before the introduction of large-scale satellite missions. Several statistical and model-based methods have been introduced to fill gaps and reconstruct historical records. Here, we employ a recently introduced deep-learning approach based on Fourier convolutions, trained on numerical climate model output, to reconstruct historical climate fields. Using this approach we are able to realistically reconstruct large and irregular areas of missing data, as well as reconstruct known historical events such as strong El Niño and La Niña with very little given information. Our method outperforms the widely used statistical kriging method as well as other recent machine learning approaches. The model generalizes to higher resolutions than the ones it was trained on and can be used on a variety of climate fields. Moreover, it allows inpainting of masks never seen before during the model training.
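The key ingredient above is the Fourier convolution: multiplying in frequency space gives every output pixel a global receptive field, which is what lets such models fill large, irregular gaps. The following is a minimal NumPy sketch of a single spectral convolution on a masked 2-D field; the filter weights and field sizes are purely illustrative, not the trained model's parameters.

```python
import numpy as np

def fourier_conv2d(field, weights):
    """Apply a global spectral (Fourier) convolution to a 2-D field.

    `weights` is a hypothetical learned per-frequency filter with the
    same shape as the real FFT of `field`.
    """
    spec = np.fft.rfft2(field)                  # forward real FFT
    spec = spec * weights                       # elementwise filter in frequency space
    return np.fft.irfft2(spec, s=field.shape)   # back to grid space

# Toy example: a 64x64 anomaly field with a large rectangular gap masked out.
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
mask = np.ones_like(field)
mask[20:40, 10:50] = 0.0                        # missing-data region
weights = np.ones((64, 33))                     # identity filter, for illustration
out = fourier_conv2d(field * mask, weights)
print(out.shape)  # (64, 64)
```

Because the multiplication acts on the full spectrum, information from every observed pixel can influence the filled-in region in a single layer, unlike a small spatial kernel.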
Related papers
- Forget Me Not: Fighting Local Overfitting with Knowledge Fusion and Distillation [6.7864586321550595]
We introduce a novel score that measures the forgetting rate of deep models on validation data. We demonstrate that local overfitting can arise even without conventional overfitting. We then introduce a two-stage approach that leverages the training history of a single model to recover and retain forgotten knowledge.
arXiv Detail & Related papers (2025-07-11T15:37:24Z) - On Local Overfitting and Forgetting in Deep Neural Networks [6.7864586321550595]
We propose a novel score that captures the forgetting rate of deep models on validation data.
We show that local overfitting occurs regardless of the presence of traditional overfitting.
We devise a new ensemble method that aims to recover forgotten knowledge, relying solely on the training history of a single network.
arXiv Detail & Related papers (2024-12-17T14:53:38Z) - Adaptive Memory Replay for Continual Learning [29.333341368722653]
Updating Foundation Models as new data becomes available can lead to catastrophic forgetting.
We introduce a framework of adaptive memory replay for continual learning, where sampling of past data is phrased as a multi-armed bandit problem.
We demonstrate the effectiveness of our approach, which maintains high performance while reducing forgetting by up to 10% at no training efficiency cost.
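The bandit formulation above can be sketched with a simple epsilon-greedy policy over replay-buffer partitions: each arm is a chunk of past data, and the reward is a proxy for how much replaying that chunk reduces forgetting. The class name, reward definition, and hyperparameters below are illustrative, not the paper's exact setup.

```python
import numpy as np

class ReplayBandit:
    """Epsilon-greedy multi-armed bandit over replay-buffer partitions."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)   # running mean reward per arm
        self.rng = np.random.default_rng(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.values)))  # explore
        return int(np.argmax(self.values))                   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Usage: at each training step, pick which past-data chunk to replay.
bandit = ReplayBandit(n_arms=4, epsilon=0.2)
true_reward = [0.1, 0.9, 0.3, 0.2]       # arm 1 reduces forgetting most
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, true_reward[arm] + 0.05 * bandit.rng.standard_normal())
print(int(np.argmax(bandit.values)))     # the bandit converges on the best arm
```

Sampling adaptively rather than uniformly concentrates the replay budget on the data most at risk of being forgotten, which is how replay cost can stay flat while forgetting drops.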
arXiv Detail & Related papers (2024-04-18T22:01:56Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
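The core geometric idea of gradient projection can be sketched in a few lines: remove from the unlearning gradient its component along directions that encode knowledge about the retained data, so the update forgets with minimal interference. This is a generic sketch of that operation, with hypothetical shapes, not the paper's exact PGU algorithm.

```python
import numpy as np

def project_gradient(grad, retain_basis):
    """Project a gradient onto the subspace orthogonal to protected directions.

    `retain_basis` is an orthonormal (d, k) matrix whose columns span the
    directions to protect; subtracting the gradient's component along them
    leaves an update that minimally disturbs retained knowledge.
    """
    return grad - retain_basis @ (retain_basis.T @ grad)

rng = np.random.default_rng(1)
# Orthonormal basis of a 3-D protected subspace in a 16-D parameter space.
basis, _ = np.linalg.qr(rng.standard_normal((16, 3)))
g = rng.standard_normal(16)
g_proj = project_gradient(g, basis)
print(np.allclose(basis.T @ g_proj, 0))  # True: nothing left in the protected subspace
```

In practice the protected basis would be estimated from gradients or activations on the remaining data; here it is random purely to demonstrate the projection.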
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised
Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder based time series anomaly detection methods (VAEs)
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem that converges at a fast rate and prevents overfitting; 2) a ruminate block is designed that leverages historical data without the need to store it; and 3) it is proven mathematically that, when fine-tuning the latent vectors and reconstructed data, linear formulations achieve the least adjustment error between the ground truths and the fine-tuned values.
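The appeal of the third point above is that a convex least-squares problem over a linear (affine) map has a cheap closed-form solution, which is what makes retraining light and resistant to overfitting. Below is a minimal sketch of such an affine adjustment; the shapes and names are illustrative, not LARA's actual formulation.

```python
import numpy as np

def linear_adjust(z, target):
    """Fit the affine map that best aligns latent vectors `z` with `target`
    in the least-squares sense, and return the adjusted outputs.

    The problem is convex, so `lstsq` finds the global optimum directly,
    with no iterative training loop to overfit.
    """
    X = np.hstack([z, np.ones((len(z), 1))])       # add bias column
    W, *_ = np.linalg.lstsq(X, target, rcond=None)  # closed-form solution
    return X @ W

rng = np.random.default_rng(2)
z = rng.standard_normal((100, 8))
true_W = rng.standard_normal((8, 4))
target = z @ true_W + 0.5                           # exactly affine ground truth
adjusted = linear_adjust(z, target)
print(np.allclose(adjusted, target))  # True: exact recovery of an affine relation
```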
arXiv Detail & Related papers (2023-10-09T12:36:16Z) - Interpolation of mountain weather forecasts by machine learning [0.0]
This paper proposes a method that uses machine learning to interpolate future weather in mountainous regions.
We focus on mountainous regions in Japan and predict temperature and precipitation mainly using LightGBM as a machine learning model.
arXiv Detail & Related papers (2023-08-27T01:32:23Z) - Continual Face Forgery Detection via Historical Distribution Preserving [88.66313037412846]
We focus on a novel and challenging problem: Continual Face Forgery Detection (CFFD)
CFFD aims to efficiently learn from new forgery attacks without forgetting previous ones.
Our experiments on the benchmarks show that our method outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2023-08-11T16:37:31Z) - An evaluation of deep learning models for predicting water depth
evolution in urban floods [59.31940764426359]
We compare different deep learning models for prediction of water depth at high spatial resolution.
Deep learning models are trained to reproduce the data simulated by the CADDIES cellular-automata flood model.
Our results show that the deep learning models generally yield lower errors than the other methods.
arXiv Detail & Related papers (2023-02-20T16:08:54Z) - ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z) - Spatiotemporal modeling of European paleoclimate using doubly sparse
Gaussian processes [61.31361524229248]
We build on recent scalable spatiotemporal sparse GPs to reduce the computational burden.
We successfully employ such a doubly sparse GP to construct a probabilistic model of paleoclimate.
arXiv Detail & Related papers (2022-11-15T14:15:04Z) - Climate-Invariant Machine Learning [0.8831201550856289]
Current climate models require representations of processes that occur at scales smaller than model grid size.
Recent machine learning (ML) algorithms hold promise to improve such process representations, but tend to extrapolate poorly to climate regimes they were not trained on.
We propose a new framework - termed "climate-invariant" ML - incorporating knowledge of climate processes into ML algorithms.
arXiv Detail & Related papers (2021-12-14T07:02:57Z) - SubseasonalClimateUSA: A Dataset for Subseasonal Forecasting and
Benchmarking [20.442879707675115]
SubseasonalClimateUSA is a curated dataset for training and benchmarking subseasonal forecasting models in the United States.
We use this dataset to benchmark a diverse suite of models, including operational dynamical models, classical meteorological baselines, and ten state-of-the-art machine learning and deep learning-based methods from the literature.
arXiv Detail & Related papers (2021-09-21T18:42:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.