Evaluating the transferability potential of deep learning models for climate downscaling
- URL: http://arxiv.org/abs/2407.12517v1
- Date: Wed, 17 Jul 2024 12:10:24 GMT
- Title: Evaluating the transferability potential of deep learning models for climate downscaling
- Authors: Ayush Prasad, Paula Harder, Qidong Yang, Prasanna Sattegeri, Daniela Szwarcman, Campbell Watson, David Rolnick
- Abstract summary: We evaluate the efficacy of training deep learning downscaling models on multiple climate datasets to learn more robust and transferable representations.
We assess the spatial, variable, and product transferability of downscaling models experimentally, to understand the generalizability of these different architecture types.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Climate downscaling, the process of generating high-resolution climate data from low-resolution simulations, is essential for understanding and adapting to climate change at regional and local scales. Deep learning approaches have proven useful in tackling this problem. However, existing studies usually focus on training models for one specific task, location and variable, which are therefore limited in their generalizability and transferability. In this paper, we evaluate the efficacy of training deep learning downscaling models on multiple diverse climate datasets to learn more robust and transferable representations. We evaluate the zero-shot transferability of several architectures: CNNs, Fourier Neural Operators (FNOs), and vision Transformers (ViTs). We assess the spatial, variable, and product transferability of downscaling models experimentally, to understand the generalizability of these different architecture types.
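The zero-shot transferability evaluation described in the abstract can be sketched as a small loop: a model trained on one dataset is scored, without fine-tuning, on regions it never saw. The code below is an illustrative assumption, not the paper's actual setup; the "model" is a trivial nearest-neighbour upsampler standing in for a trained CNN/FNO/ViT, and block-mean coarsening and RMSE are placeholder choices.

```python
import numpy as np

def downscale_model(lr_field, factor=4):
    """Placeholder 'downscaling model': nearest-neighbour upsampling.
    A trained CNN, FNO, or ViT would replace this function."""
    return np.repeat(np.repeat(lr_field, factor, axis=0), factor, axis=1)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def zero_shot_eval(model, lr_fields, hr_fields, factor=4):
    """Score a model on held-out regions it was never trained on."""
    return [rmse(model(lr, factor), hr) for lr, hr in zip(lr_fields, hr_fields)]

# Synthetic example: two held-out regions at 4x coarsening.
rng = np.random.default_rng(0)
hr = [rng.standard_normal((64, 64)) for _ in range(2)]
lr = [f.reshape(16, 4, 16, 4).mean(axis=(1, 3)) for f in hr]  # block-mean coarsening
scores = zero_shot_eval(downscale_model, lr, hr)
```

Swapping in models trained on different source datasets and comparing their `scores` on the same held-out targets is the shape of the spatial-transferability experiment the abstract describes.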
Related papers
- Towards Physically Consistent Deep Learning For Climate Model Parameterizations [46.07009109585047]
We propose an efficient supervised learning framework for deep learning-based parameterizations.
We show that our method robustly identifies a small subset of the inputs as actual physical drivers.
Our framework represents a crucial step in addressing a major challenge in data-driven climate model parameterizations.
arXiv Detail & Related papers (2024-06-06T10:02:49Z)
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap of parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
- Generating High-Resolution Regional Precipitation Using Conditional Diffusion Model [7.784934642915291]
This paper presents a deep generative model for downscaling climate data, specifically precipitation on a regional scale.
We employ a denoising diffusion probabilistic model conditioned on multiple LR climate variables.
Our results demonstrate significant improvements over existing baselines, underscoring the effectiveness of the conditional diffusion model in downscaling climate data.
arXiv Detail & Related papers (2023-12-12T09:39:52Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- ClimateLearn: Benchmarking Machine Learning for Weather and Climate Modeling [20.63843548201849]
ClimateLearn is an open-source library that vastly simplifies the training and evaluation of machine learning models for data-driven climate science.
It is the first large-scale, open-source effort for bridging research in weather and climate modeling with modern machine learning systems.
arXiv Detail & Related papers (2023-07-04T20:36:01Z)
- Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere [53.63505583883769]
We introduce Spherical FNOs (SFNOs) for learning operators on spherical geometries.
SFNOs have important implications for machine learning-based simulation of climate dynamics.
arXiv Detail & Related papers (2023-06-06T16:27:17Z)
- Climate Intervention Analysis using AI Model Guided by Statistical Physics Principles [6.824166358727082]
We propose a novel solution by utilizing a principle from statistical physics known as the Fluctuation-Dissipation Theorem (FDT).
By leveraging the FDT, we are able to extract information encoded in a large dataset produced by Earth System Models.
Our model, AiBEDO, is capable of capturing the complex, multi-timescale effects of radiation perturbations on global and regional surface climate.
arXiv Detail & Related papers (2023-02-07T05:09:10Z)
- ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z)
- Multi-scale Digital Twin: Developing a fast and physics-informed surrogate model for groundwater contamination with uncertain climate models [53.44486283038738]
Climate change exacerbates the long-term soil management problem of groundwater contamination.
We develop a physics-informed machine learning surrogate model using a U-Net enhanced Fourier Neural Operator.
In parallel, we develop a convolutional autoencoder combined with climate data to reduce the dimensionality of climatic region similarities across the United States.
arXiv Detail & Related papers (2022-11-20T06:46:35Z)
- Climate-Invariant Machine Learning [0.8831201550856289]
Current climate models require representations of processes that occur at scales smaller than model grid size.
Recent machine learning (ML) algorithms hold promise to improve such process representations, but tend to extrapolate poorly to climate regimes they were not trained on.
We propose a new framework - termed "climate-invariant" ML - incorporating knowledge of climate processes into ML algorithms.
arXiv Detail & Related papers (2021-12-14T07:02:57Z)
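The "climate-invariant" idea summarized above, rescaling raw inputs into forms whose distributions shift less across climate regimes so an ML model extrapolates better, can be illustrated with one common transform: specific humidity to relative humidity. The transform choice and the Magnus-approximation coefficients below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rel_humidity(q, p_pa, t_k):
    """Convert specific humidity q (kg/kg) at pressure p_pa (Pa) and
    temperature t_k (K) into relative humidity (0..1), a candidate
    climate-invariant input. Saturation vapour pressure uses the
    Magnus approximation over liquid water."""
    t_c = t_k - 273.15
    e_sat = 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))  # Pa
    e = q * p_pa / (0.622 + 0.378 * q)                      # vapour pressure, Pa
    return e / e_sat

# Warm, moist surface air at sea-level pressure.
rh = float(rel_humidity(0.01, 101325.0, 300.0))
```

Feeding `rh` (and similarly rescaled fields) to the emulator in place of raw `q` is the kind of physics-guided preprocessing the framework proposes.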
This list is automatically generated from the titles and abstracts of the papers in this site.