DL4DS -- Deep Learning for empirical DownScaling
- URL: http://arxiv.org/abs/2205.08967v1
- Date: Sat, 7 May 2022 11:24:43 GMT
- Title: DL4DS -- Deep Learning for empirical DownScaling
- Authors: Carlos Alberto Gomez Gonzalez
- Abstract summary: This paper presents DL4DS, a Python library that implements a variety of state-of-the-art and novel algorithms for downscaling gridded Earth Science data with deep neural networks.
We showcase the capabilities of DL4DS on air quality CAMS data over the western Mediterranean area.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common task in Earth Sciences is to infer climate information at local and
regional scales from global climate models. Dynamical downscaling requires
running expensive numerical models at high resolution which can be prohibitive
due to long model runtimes. On the other hand, statistical downscaling
techniques present an alternative approach for learning links between the
large- and local-scale climate in a more efficient way. A large number of deep
neural network-based approaches for statistical downscaling have been proposed
in recent years, mostly based on convolutional architectures developed for
computer vision and super-resolution tasks. This paper presents DL4DS, Deep
Learning for empirical DownScaling, a Python library that implements a wide
variety of state-of-the-art and novel algorithms for downscaling gridded Earth
Science data with deep neural networks. DL4DS has been designed with the goal
of providing a general framework for training convolutional neural networks
with configurable architectures and learning strategies, facilitating robust
comparative and ablation studies. We showcase the
capabilities of DL4DS on air quality CAMS data over the western Mediterranean
area. The DL4DS library can be found in this repository:
https://github.com/carlos-gg/dl4ds
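The supervised setup the abstract describes — learning a mapping from coarse to fine grids, as in super-resolution — typically builds training pairs by coarsening a high-resolution reference field. A minimal sketch of that pairing step in plain NumPy (illustrative only; the names `coarsen`, `hires`, and `lores` are assumptions for this example, not part of the DL4DS API):

```python
import numpy as np

def coarsen(field, factor):
    """Average-pool a 2D field by `factor` to mimic a coarse model grid."""
    h, w = field.shape
    trimmed = field[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

# Synthetic stand-in for a high-resolution reference field
# (e.g. one time step of a gridded air-quality variable).
rng = np.random.default_rng(0)
hires = rng.random((64, 64))

# Paired sample: the coarse field is the network input and the
# high-resolution field is the training target.
lores = coarsen(hires, factor=4)
print(lores.shape)  # (16, 16)
```

Block averaging with an exact divisor preserves the domain mean, so the coarse input is a physically consistent degradation of the target rather than an arbitrary subsample.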
Related papers
- Quanv4EO: Empowering Earth Observation by means of Quanvolutional Neural Networks [62.12107686529827]
This article highlights a significant shift towards leveraging quantum computing techniques in processing large volumes of remote sensing data.
The proposed Quanv4EO model introduces a quanvolution method for preprocessing multi-dimensional EO data.
Key findings suggest that the proposed model not only maintains high precision in image classification but also shows improvements of around 5% in EO use cases.
arXiv Detail & Related papers (2024-07-24T09:11:34Z)
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- FedDKD: Federated Learning with Decentralized Knowledge Distillation [3.9084449541022055]
We propose a novel framework of federated learning equipped with the process of decentralized knowledge distillation (FedDKD).
We show that FedDKD outperforms the state-of-the-art methods with more efficient communication and training in a few DKD steps.
arXiv Detail & Related papers (2022-05-02T07:54:07Z)
- Deep-learning-based upscaling method for geologic models via theory-guided convolutional neural network [0.0]
A deep convolutional neural network (CNN) is trained to approximate the relationship between the coarse grid of hydraulic conductivity fields and the hydraulic heads.
With the physical information considered, dependence on the data volume of training the deep CNN model can be reduced greatly.
arXiv Detail & Related papers (2021-12-31T08:10:48Z)
- An optimised deep spiking neural network architecture without gradients [7.183775638408429]
We present an end-to-end trainable modular event-driven neural architecture that uses local synaptic and threshold adaptation rules.
The architecture represents a highly abstracted model of existing Spiking Neural Network (SNN) architectures.
arXiv Detail & Related papers (2021-09-27T05:59:12Z)
- A Fortran-Keras Deep Learning Bridge for Scientific Computing [6.768544973019004]
We introduce a software library, the Fortran-Keras Bridge (FKB).
The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles.
The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation.
arXiv Detail & Related papers (2020-04-14T15:10:09Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.