On the modern deep learning approaches for precipitation downscaling
- URL: http://arxiv.org/abs/2207.00808v1
- Date: Sat, 2 Jul 2022 11:57:39 GMT
- Title: On the modern deep learning approaches for precipitation downscaling
- Authors: Bipin Kumar, Kaustubh Atey, Bhupendra Bahadur Singh, Rajib
Chattopadhyay, Nachiket Acharya, Manmeet Singh, Ravi S. Nanjundiah, and
Suryachandra A. Rao
- Abstract summary: We carry out DL-based downscaling to estimate local precipitation data from the India Meteorological Department (IMD).
To test the efficacy of different DL approaches, we apply four different methods of downscaling and evaluate their performance.
The results indicate that SR-GAN is the best method for precipitation data downscaling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) based downscaling has become a popular tool in earth
sciences recently. Increasingly, different DL approaches are being adopted to
downscale coarser precipitation data and generate more accurate and reliable
estimates at local (~few km or even smaller) scales. Despite several studies
adopting dynamical or statistical downscaling of precipitation, the accuracy is
limited by the availability of ground truth. A key challenge to gauge the
accuracy of such methods is to compare the downscaled data to point-scale
observations which are often unavailable at such small scales. In this work, we
carry out DL-based downscaling to estimate local precipitation data
from the India Meteorological Department (IMD), which was created by
approximating the value from station location to a grid point. To test the
efficacy of different DL approaches, we apply four different methods of
downscaling and evaluate their performance. The considered approaches are (i)
Deep Statistical Downscaling (DeepSD), (ii) augmented Convolutional Long
Short-Term Memory (ConvLSTM), (iii) a fully convolutional network (U-NET), and
(iv) a Super-Resolution Generative Adversarial Network (SR-GAN). A custom VGG
network, used in the SR-GAN, is developed in this work using precipitation
data. The results
indicate that SR-GAN is the best method for precipitation data downscaling. The
downscaled data are validated against precipitation values at IMD stations. This DL
method offers a promising alternative to statistical downscaling.
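The evaluation pipeline the abstract describes, upsampling a coarse precipitation grid and validating the result against station observations, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the downscaler here is a trivial nearest-neighbour upsampling placeholder standing in for DeepSD/ConvLSTM/U-NET/SR-GAN, and all grids, station locations, and gauge values are synthetic.

```python
import numpy as np

# Hypothetical coarse precipitation field (mm/day) on a 16x16 grid.
rng = np.random.default_rng(0)
coarse = rng.gamma(shape=2.0, scale=3.0, size=(16, 16))

def naive_downscaler(grid, factor=4):
    """Placeholder for a learned downscaler (DeepSD / SR-GAN / U-NET etc.):
    simply repeats each coarse cell, a baseline any DL method aims to beat."""
    return np.kron(grid, np.ones((factor, factor)))

fine = naive_downscaler(coarse)               # (64, 64) downscaled field

# Station validation: sample the downscaled field at the grid cells nearest
# to each (hypothetical) station and score against gauge values with RMSE.
stations = [(5, 10), (30, 42), (60, 7)]       # illustrative station pixels
observed = np.array([6.1, 2.3, 9.8])          # illustrative gauge values (mm/day)
predicted = np.array([fine[r, c] for r, c in stations])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"station RMSE: {rmse:.2f} mm/day")
```

Swapping `naive_downscaler` for a trained super-resolution network while keeping the same station-RMSE scoring is the comparison shape the abstract's four-method evaluation implies.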
Related papers
- Comparison of machine learning algorithms for merging gridded satellite
and earth-observed precipitation data [7.434517639563671]
We use monthly earth-observed precipitation data from the Global Historical Climatology Network monthly database, version 2.
Results suggest that extreme gradient boosting and random forests are the most accurate in terms of the squared error scoring function.
arXiv Detail & Related papers (2022-12-17T09:39:39Z) - Minimizing the Accumulated Trajectory Error to Improve Dataset
Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against the accumulated errors perturbations with the regularization towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z) - DL4DS -- Deep Learning for empirical DownScaling [0.0]
This paper presents DL4DS, a Python library that implements a variety of state-of-the-art and novel algorithms for downscaling gridded Earth science data with deep neural networks.
We showcase the capabilities of DL4DS on air quality CAMS data over the western Mediterranean area.
arXiv Detail & Related papers (2022-05-07T11:24:43Z) - HYDRA: Hypergradient Data Relevance Analysis for Interpreting Deep
Neural Networks [51.143054943431665]
We propose Hypergradient Data Relevance Analysis, or HYDRA, which interprets predictions made by deep neural networks (DNNs) as effects of their training data.
HYDRA assesses the contribution of training data toward test data points throughout the training trajectory.
In addition, we quantitatively demonstrate that HYDRA outperforms influence functions in accurately estimating data contribution and detecting noisy data labels.
arXiv Detail & Related papers (2021-02-04T10:00:13Z) - Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z) - Deep-learning based down-scaling of summer monsoon rainfall data over
Indian region [0.0]
Dynamical and statistical downscaling models are often used to obtain high-resolution gridded information over larger domains.
Deep Learning (DL) based methods provide an efficient solution in downscaling rainfall data for regional climate forecasting and real-time rainfall observation data at high spatial resolutions.
In this work, we employed three deep learning-based algorithms derived from the super-resolution convolutional neural network (SRCNN) methods to produce 4x higher-resolution downscaled rainfall data during the summer monsoon season.
arXiv Detail & Related papers (2020-11-23T10:24:17Z) - Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks [3.055601224691843]
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research.
We propose Dynamically Sampled Nonlocal Gradient Descent (DSNGD) for constructing stronger adversarial attacks.
We show that DSNGD-based attacks are on average 35% faster while achieving 0.9% to 27.1% higher success rates compared to their gradient descent-based counterparts.
arXiv Detail & Related papers (2020-11-05T08:55:24Z) - ClimAlign: Unsupervised statistical downscaling of climate variables via
normalizing flows [0.7734726150561086]
We present ClimAlign, a novel method for unsupervised, generative downscaling using adaptations of recent work in normalizing flows for variational inference.
We show that our method achieves comparable predictive performance to existing supervised downscaling methods while simultaneously allowing for both conditional and unconditional sampling from the joint distribution over high and low resolution spatial fields.
arXiv Detail & Related papers (2020-08-11T13:01:53Z) - Variable Skipping for Autoregressive Range Density Estimation [84.60428050170687]
We show a technique, variable skipping, for accelerating range density estimation over deep autoregressive models.
We show that variable skipping provides 10-100x efficiency improvements when targeting challenging high-quantile error metrics.
arXiv Detail & Related papers (2020-07-10T19:01:40Z) - JHU-CROWD++: Large-Scale Crowd Counting Dataset and A Benchmark Method [92.15895515035795]
We introduce a new large-scale unconstrained crowd counting dataset (JHU-CROWD++) that contains 4,372 images with 1.51 million annotations.
We propose a novel crowd counting network that progressively generates crowd density maps via residual error estimation.
arXiv Detail & Related papers (2020-04-07T14:59:35Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
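The Attentional-Biased Stochastic Gradient Descent (ABSGD) entry above describes a simple modification of momentum SGD in which each sample in the mini-batch receives an individual importance weight. A minimal sketch of that idea, assuming loss-derived softmax weights on a toy linear-regression batch; the weighting rule, temperature, and hyperparameters here are illustrative, not the authors' exact formulation:

```python
import numpy as np

# Toy linear-regression mini-batch with known true coefficients.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=8)

w = np.zeros(3)
velocity = np.zeros(3)
lr, beta, tau = 0.05, 0.9, 1.0   # step size, momentum, temperature

for _ in range(500):
    residual = X @ w - y                  # per-sample error
    losses = 0.5 * residual ** 2          # per-sample loss
    # Individual importance weights: softmax of scaled losses, so
    # harder samples contribute more than a uniform batch average.
    weights = np.exp(losses / tau)
    weights /= weights.sum()
    # Weighted gradient in place of the usual uniform mini-batch mean.
    grad = (weights * residual) @ X
    velocity = beta * velocity + grad     # momentum buffer
    w -= lr * velocity

print(w)  # learned weights; on this toy problem they land near true_w
```

The only change relative to plain momentum SGD is the `weights` vector, which is why the summary above notes that the scheme combines with other robust losses at no additional cost.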
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.