Ensemble learning for blending gridded satellite and gauge-measured
precipitation data
- URL: http://arxiv.org/abs/2307.06840v2
- Date: Sat, 14 Oct 2023 17:15:07 GMT
- Title: Ensemble learning for blending gridded satellite and gauge-measured
precipitation data
- Authors: Georgia Papacharalampous, Hristos Tyralis, Nikolaos Doulamis,
Anastasios Doulamis
- Abstract summary: This study proposes 11 new ensemble learners for improving the accuracy of satellite precipitation products.
We apply the ensemble learners to monthly data from the PERSIANN and IMERG gridded datasets.
We also use gauge-measured precipitation data from the Global Historical Climatology Network monthly database.
- Score: 4.2193475197905705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regression algorithms are regularly used for improving the accuracy of
satellite precipitation products. In this context, satellite precipitation and
topography data are the predictor variables, and gauge-measured precipitation
data are the dependent variables. Alongside this, it is increasingly recognised
in many fields that combinations of algorithms through ensemble learning can
lead to substantial predictive performance improvements. Still, a sufficient
number of ensemble learners for improving the accuracy of satellite
precipitation products and their large-scale comparison are currently missing
from the literature. In this study, we work towards filling in this specific
gap by proposing 11 new ensemble learners in the field and by extensively
comparing them. We apply the ensemble learners to monthly data from the
PERSIANN (Precipitation Estimation from Remotely Sensed Information using
Artificial Neural Networks) and IMERG (Integrated Multi-satellitE Retrievals
for GPM) gridded datasets that span a 15-year period and cover the entire
contiguous United States (CONUS). We also use gauge-measured precipitation
data from the Global Historical Climatology Network monthly database, version 2
(GHCNm). The ensemble learners combine the predictions of six machine learning
regression algorithms (base learners), namely the multivariate adaptive
regression splines (MARS), multivariate adaptive polynomial splines
(poly-MARS), random forests (RF), gradient boosting machines (GBM), extreme
gradient boosting (XGBoost) and Bayesian regularized neural networks (BRNN),
and each of them is based on a different combiner. The combiners include the
equal-weight combiner, the median combiner, two best learners and seven
variants of a sophisticated stacking method. The latter stacks a regression
algorithm on top of the base learners to combine their independent
predictions...
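To make the combiner families concrete, here is a minimal, hypothetical sketch, not the authors' code: two scikit-learn regressors stand in for the paper's six base learners (MARS, poly-MARS, RF, GBM, XGBoost and BRNN), and synthetic data stands in for the satellite, topography and gauge variables.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Toy stand-in for the real data (satellite precipitation and topography as
# predictors, gauge measurements as the target).
X, y = make_regression(n_samples=3000, n_features=4, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Base learners; two scikit-learn models stand in for the paper's six.
base_learners = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),
]
for model in base_learners:
    model.fit(X_train, y_train)

# Independent predictions of the base learners, stacked column-wise.
P_val = np.column_stack([m.predict(X_val) for m in base_learners])
P_test = np.column_stack([m.predict(X_test) for m in base_learners])

# 1. Equal-weight combiner: simple average of the base predictions.
pred_mean = P_test.mean(axis=1)

# 2. Median combiner: element-wise median across base learners.
pred_median = np.median(P_test, axis=1)

# 3. Best learner: keep the base learner with the lowest validation error.
val_errors = [mean_squared_error(y_val, P_val[:, j]) for j in range(P_val.shape[1])]
pred_best = P_test[:, int(np.argmin(val_errors))]

# 4. Stacking: fit a combiner regression algorithm on the base learners'
#    validation predictions (a linear model here; the paper compares seven
#    stacking variants).
stacker = LinearRegression().fit(P_val, y_val)
pred_stacked = stacker.predict(P_test)

for name, pred in [("mean", pred_mean), ("median", pred_median),
                   ("best", pred_best), ("stacking", pred_stacked)]:
    print(f"{name:>8}: test MSE = {mean_squared_error(y_test, pred):.2f}")
```

Note that the best-learner choice and the stacking weights are fit on held-out validation predictions only, mirroring the setup in which a regression algorithm is stacked on top of the base learners' independent predictions.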
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Uncertainty estimation in satellite precipitation spatial prediction by combining distributional regression algorithms [3.8623569699070353]
We introduce the concept of distributional regression for the engineering task of creating precipitation datasets through data merging.
We propose new ensemble learning methods that can be valuable not only for spatial prediction but also for prediction problems in general.
arXiv Detail & Related papers (2024-06-29T05:58:00Z)
- Uncertainty estimation in spatial interpolation of satellite precipitation with ensemble learning [3.8623569699070353]
We introduce nine quantile-based ensemble learners and apply them to large precipitation datasets.
Our ensemble learners include six stacking methods and three simple methods (mean, median and best combiner); a pinball-loss sketch follows this list.
Stacking with quantile regression (QR) and quantile regression neural networks (QRNN) yielded the best results across the quantile levels of interest.
arXiv Detail & Related papers (2024-03-14T17:45:56Z)
- Hierarchically Coherent Multivariate Mixture Networks [11.40498954142061]
Probabilistic coherent forecasting aims to produce forecasts that are consistent across levels of aggregation.
We optimize the networks with a composite likelihood objective, allowing us to capture relationships across time series.
Our approach demonstrates 13.2% average accuracy improvements on most datasets compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-05-11T18:52:11Z)
- Comparison of tree-based ensemble algorithms for merging satellite and earth-observed precipitation data at the daily time scale [7.434517639563671]
Merging satellite products and ground-based measurements is often required for obtaining precipitation datasets that simultaneously cover large regions with high density.
Machine and statistical learning regression algorithms are regularly utilized in this endeavour.
Tree-based ensemble algorithms for regression are adopted in various fields for solving algorithmic problems with high accuracy and low computational cost.
arXiv Detail & Related papers (2022-12-31T11:14:45Z)
- Comparison of machine learning algorithms for merging gridded satellite and earth-observed precipitation data [7.434517639563671]
We use monthly earth-observed precipitation data from the Global Historical Climatology Network monthly database, version 2.
Results suggest that extreme gradient boosting and random forests are the most accurate in terms of the squared error scoring function.
arXiv Detail & Related papers (2022-12-17T09:39:39Z)
- Learning with MISELBO: The Mixture Cookbook [62.75516608080322]
We present the first ever mixture of variational approximations for a normalizing flow-based hierarchical variational autoencoder (VAE) with VampPrior and a PixelCNN decoder network.
We explain this cooperative behavior by drawing a novel connection between VI and adaptive importance sampling.
We obtain state-of-the-art results among VAE architectures in terms of negative log-likelihood on the MNIST and FashionMNIST datasets.
arXiv Detail & Related papers (2022-09-30T15:01:35Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Improving Generalization in Reinforcement Learning with Mixture Regularization [113.12412071717078]
We introduce a simple approach, named mixreg, which trains agents on a mixture of observations from different training environments.
Mixreg increases the data diversity more effectively and helps learn smoother policies.
Results show mixreg outperforms the well-established baselines on unseen testing environments by a large margin.
arXiv Detail & Related papers (2020-10-21T08:12:03Z)
- Recent Developments Combining Ensemble Smoother and Deep Generative Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoder networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds than naive parallel approaches while retaining theoretical convergence guarantees.
Experiments on several benchmark datasets show the effectiveness of the algorithm and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
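The quantile-based ensemble entries above score predictive quantiles with the pinball (quantile) loss. A small illustrative sketch, with made-up data and hypothetical base-learner quantile predictions standing in for real QR/QRNN output, of mean and median combiners evaluated at a single quantile level:

```python
import numpy as np

def pinball_loss(y_true, q_pred, level):
    """Average pinball loss of quantile predictions at the given level."""
    diff = y_true - q_pred
    # level * diff when under-predicting, (level - 1) * diff when over-predicting
    return np.mean(np.maximum(level * diff, (level - 1) * diff))

rng = np.random.default_rng(0)
y_true = rng.gamma(shape=2.0, scale=5.0, size=1000)  # toy precipitation amounts

# Hypothetical 0.9-quantile predictions from three base learners (crude
# inflated stand-ins, not output of any real model).
Q = np.column_stack([y_true * f + rng.normal(0.0, 2.0, 1000) for f in (1.2, 1.3, 1.4)])

q_mean = Q.mean(axis=1)          # simple mean combiner across learners
q_median = np.median(Q, axis=1)  # median combiner

for name, q in [("mean", q_mean), ("median", q_median)]:
    print(f"{name:>6}: pinball loss at level 0.9 = {pinball_loss(y_true, q, 0.9):.3f}")
```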
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.