Global Precipitation Nowcasting of Integrated Multi-satellitE Retrievals
for GPM: A U-Net Convolutional LSTM Architecture
- URL: http://arxiv.org/abs/2307.10843v2
- Date: Fri, 2 Feb 2024 22:51:17 GMT
- Title: Global Precipitation Nowcasting of Integrated Multi-satellitE Retrievals
for GPM: A U-Net Convolutional LSTM Architecture
- Authors: Reyhaneh Rahimi, Praveen Ravirathinam, Ardeshir Ebtehaj, Ali Behrangi,
Jackson Tan, Vipin Kumar
- Abstract summary: This paper presents a deep learning architecture for nowcasting of precipitation almost globally every 30 min with a 4-hour lead time.
The architecture fuses a U-Net and a convolutional long short-term memory (LSTM) neural network.
It is trained using data from the Integrated Multi-satellitE Retrievals for GPM (IMERG) and a few key precipitation drivers from the Global Forecast System (GFS).
- Score: 3.5776345196917254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a deep learning architecture for nowcasting of
precipitation almost globally every 30 min with a 4-hour lead time. The
architecture fuses a U-Net and a convolutional long short-term memory (LSTM)
neural network and is trained using data from the Integrated Multi-satellitE
Retrievals for GPM (IMERG) and a few key precipitation drivers from the Global
Forecast System (GFS). The impacts of different training loss functions,
including the mean-squared error (regression) and the focal-loss
(classification), on the quality of precipitation nowcasts are studied. The
results indicate that the regression network performs well in capturing light
precipitation (below 1.6 mm/hr), but the classification network can outperform
the regression network for nowcasting of precipitation extremes (>8 mm/hr), in
terms of the critical success index (CSI). Using the Wasserstein distance, it
is shown that the predicted precipitation by the classification network has a
closer class probability distribution to the IMERG than the regression network.
It is uncovered that the inclusion of the physical variables can improve
precipitation nowcasting, especially at longer lead times in both networks.
Taking IMERG as a relative reference, a multi-scale analysis in terms of
fractions skill score (FSS) shows that the nowcasting machine remains skillful
(FSS > 0.5) at the resolution of 10 km compared to 50 km for GFS. For
precipitation rates greater than 4 mm/hr, only the classification network
remains FSS-skillful on scales greater than 50 km within a 2-hour lead time.
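The contrast between the regression (MSE) and classification (focal-loss) objectives is central to the abstract. Below is a minimal PyTorch sketch of both, assuming precipitation is binned into intensity classes for the classification branch; the class boundaries, function names, and weighting scheme are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def regression_loss(pred, target):
    """Plain MSE on precipitation rates (mm/hr), the regression objective."""
    return F.mse_loss(pred, target)

def focal_loss(logits, target_class, gamma=2.0, alpha=None):
    """Multi-class focal loss: down-weights easy (mostly dry) pixels so
    rare heavy-rain classes dominate the gradient.

    logits:       (N, C, H, W) raw class scores
    target_class: (N, H, W) integer class labels
    """
    log_p = F.log_softmax(logits, dim=1)                      # (N, C, H, W)
    log_pt = log_p.gather(1, target_class.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:                                     # optional (C,) class weights
        loss = loss * alpha[target_class]
    return loss.mean()

def rate_to_class(rain_mm_hr, bounds=(0.1, 1.6, 8.0)):
    """Illustrative binning into dry / light / moderate / extreme classes.
    The 1.6 and 8 mm/hr cutoffs echo the ranges quoted in the abstract;
    the paper's actual class edges may differ."""
    cls = torch.zeros_like(rain_mm_hr, dtype=torch.long)
    for b in bounds:
        cls += (rain_mm_hr >= b).long()
    return cls
```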
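Likewise, the two verification scores the abstract reports, CSI and FSS, are straightforward to compute. A minimal NumPy sketch follows; the thresholds and the FSS > 0.5 skill cutoff come from the abstract, everything else is assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def csi(pred, obs, thresh):
    """Critical success index = hits / (hits + misses + false alarms)."""
    p, o = pred >= thresh, obs >= thresh
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    return hits / max(hits + misses + false_alarms, 1)

def fss(pred, obs, thresh, window):
    """Fractions skill score at a given neighborhood size (in grid cells);
    FSS > 0.5 is the 'useful skill' cutoff used in the abstract."""
    fp = uniform_filter((pred >= thresh).astype(float), size=window)
    fo = uniform_filter((obs >= thresh).astype(float), size=window)
    mse = np.mean((fp - fo) ** 2)
    ref = np.mean(fp ** 2) + np.mean(fo ** 2)
    return 1.0 - mse / ref if ref > 0 else 1.0
```

On IMERG's roughly 10 km grid, a one-cell window probes the 10 km scale at which the abstract reports the nowcasts stay FSS-skillful, and a five-cell window the 50 km scale quoted for GFS.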
Related papers
- MAD-SmaAt-GNet: A Multimodal Advection-Guided Neural Network for Precipitation Nowcasting [2.0912407740405903]
Deep learning models have shown strong potential for precipitation nowcasting, offering both accuracy and computational efficiency. This paper introduces the Multimodal Advection-Guided Small Attention GNet (MAD-SmaAt-GNet). MAD-SmaAt-GNet reduces the mean squared error (MSE) by 8.9% compared with the baseline SmaAt-UNet for four-step precipitation forecasting up to four hours ahead.
arXiv Detail & Related papers (2026-03-03T10:32:15Z)
- Precipitation nowcasting of satellite data using physically-aligned neural networks [1.8468488572500306]
TUPANN is a satellite-only model trained on GOES-16 RRQPE. It decomposes the forecast into physically meaningful components. TUPANN achieves the best or second-best skill in most settings.
arXiv Detail & Related papers (2025-11-07T18:33:40Z)
- CSU-PCAST: A Dual-Branch Transformer Framework for medium-range ensemble Precipitation Forecasting [6.540270371082014]
This study develops a deep learning-based ensemble framework for multi-step precipitation prediction. The architecture employs a patch-based Swin Transformer backbone with periodic convolutions to handle longitudinal continuity. Training minimizes a hybrid loss combining the Continuous Ranked Probability Score (CRPS) and a weighted log1p mean squared error (log1pMSE), as sketched below.
arXiv Detail & Related papers (2025-10-23T17:43:38Z)
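For readers unfamiliar with that hybrid objective, here is a minimal PyTorch sketch of a CRPS-plus-log1pMSE loss for an ensemble forecast. The sample-based CRPS estimator is standard; the weighting, normalization, and function names are assumptions, not CSU-PCAST's actual code.

```python
import torch

def crps_ensemble(ens, obs):
    """Sample-based CRPS estimator: CRPS = E|X - y| - 0.5 * E|X - X'|.
    ens: (M, H, W) ensemble members; obs: (H, W) observation."""
    term1 = (ens - obs.unsqueeze(0)).abs().mean(dim=0)
    term2 = (ens.unsqueeze(0) - ens.unsqueeze(1)).abs().mean(dim=(0, 1))
    return (term1 - 0.5 * term2).mean()

def log1p_mse(pred, target, weight=None):
    """MSE in log1p space; optional per-pixel weights emphasize heavy rain.
    Rain rates are clamped to be nonnegative before the log transform."""
    err = (torch.log1p(pred.clamp(min=0)) - torch.log1p(target.clamp(min=0))) ** 2
    if weight is not None:
        err = err * weight
    return err.mean()

def hybrid_loss(ens, target, lam=1.0):
    """Illustrative combination; the paper's actual coefficients are
    not reproduced here."""
    return crps_ensemble(ens, target) + lam * log1p_mse(ens.mean(dim=0), target)
```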
- Hybrid Quantum Recurrent Neural Network For Remaining Useful Life Prediction [67.410870290301]
We introduce a Hybrid Quantum Recurrent Neural Network framework, combining Quantum Long Short-Term Memory layers with classical dense layers for Remaining Useful Life forecasting.
Experimental results demonstrate that, despite having fewer trainable parameters, the Hybrid Quantum Recurrent Neural Network achieves up to a 5% improvement over a Recurrent Neural Network.
arXiv Detail & Related papers (2025-04-29T14:41:41Z)
- DYffCast: Regional Precipitation Nowcasting Using IMERG Satellite Data. A case study over South America [3.583227696181354]
The ability to accurately nowcast precipitation is becoming more critical for safeguarding society.
Motivated by the recent success of generative models at precipitation nowcasting, this paper extends the DYffusion framework to this task.
It modifies the framework to improve its ability to model rainfall data and introduces a novel loss function that combines MSE, MAE and the LPIPS perceptual score.
arXiv Detail & Related papers (2024-12-02T22:20:31Z)
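A hedged sketch of such a combined MSE + MAE + LPIPS objective follows, using the `lpips` package. Mapping single-channel rain fields into the 3-channel [-1, 1] images LPIPS expects, the normalization constant, and the term weights are all illustrative assumptions rather than DYffCast's actual settings.

```python
import torch
import lpips  # pip install lpips

_lpips = lpips.LPIPS(net="alex")  # perceptual distance (Zhang et al., 2018)

def to_lpips_space(x, max_rate=50.0):
    """Map (N, 1, H, W) rain fields to the 3-channel [-1, 1] images LPIPS
    expects; max_rate is an illustrative normalization constant."""
    x = (x.clamp(0, max_rate) / max_rate) * 2.0 - 1.0
    return x.repeat(1, 3, 1, 1)

def combined_loss(pred, target, w_mse=1.0, w_mae=1.0, w_lpips=0.1):
    """Hypothetical weighting of the three terms named in the summary."""
    mse = torch.mean((pred - target) ** 2)
    mae = torch.mean((pred - target).abs())
    perc = _lpips(to_lpips_space(pred), to_lpips_space(target)).mean()
    return w_mse * mse + w_mae * mae + w_lpips * perc
```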
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, solutions are found only through the training procedure, whose optimizer and regularizers limit the flexibility that is actually realized.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- A Robust Machine Learning Approach for Path Loss Prediction in 5G Networks with Nested Cross Validation [0.6138671548064356]
We utilize machine learning (ML) methods, which outperform conventional path loss prediction models, for path loss prediction in a 5G network system.
First, we acquire a dataset obtained through a comprehensive measurement campaign conducted in an urban macro-cell scenario located in Beijing, China.
We deploy Support Vector Regression (SVR), CatBoost Regression (CBR), eXtreme Gradient Boosting Regression (XGBR), Artificial Neural Network (ANN), and Random Forest (RF) methods to predict the path loss, and compare the prediction results in terms of Mean Absolute Error (MAE) and Mean Square Error (MSE).
arXiv Detail & Related papers (2023-10-02T09:21:58Z)
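Nested cross-validation, named in the title, separates hyperparameter tuning from error estimation. A small scikit-learn sketch on synthetic data follows; only SVR and RF are shown (the other models follow the same pattern), and the feature set and parameter grids are made up for illustration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

def nested_cv_mae(model, param_grid, X, y, seed=0):
    """Nested CV: the inner loop tunes hyperparameters, the outer loop
    yields an unbiased error estimate for the tuned model."""
    inner = KFold(n_splits=5, shuffle=True, random_state=seed)
    outer = KFold(n_splits=5, shuffle=True, random_state=seed + 1)
    tuned = GridSearchCV(model, param_grid, cv=inner,
                         scoring="neg_mean_absolute_error")
    scores = cross_val_score(tuned, X, y, cv=outer,
                             scoring="neg_mean_absolute_error")
    return -scores.mean()

# Illustrative comparison on synthetic features; the paper's Beijing
# measurement data and exact grids are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # e.g., distance, frequency, antenna heights...
y = 120 + 20 * X[:, 0] + rng.normal(scale=3, size=500)  # fake path loss (dB)
for name, model, grid in [
    ("SVR", SVR(), {"C": [1, 10], "gamma": ["scale", 0.1]}),
    ("RF", RandomForestRegressor(random_state=0), {"n_estimators": [100, 300]}),
]:
    print(name, "MAE:", nested_cv_mae(model, grid, X, y))
```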
- Deep Learning for Day Forecasts from Sparse Observations [60.041805328514876]
Deep neural networks offer an alternative paradigm for modeling weather conditions.
MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point.
MetNet-3 has a high temporal and spatial resolution of up to 2 minutes and 1 km, respectively, as well as a low operational latency.
arXiv Detail & Related papers (2023-06-06T07:07:54Z)
- Efficient Traffic State Forecasting using Spatio-Temporal Network Dependencies: A Sparse Graph Neural Network Approach [6.203371866342754]
Traffic prediction in a transportation network is paramount for effective traffic operations and management.
Long-term traffic prediction (beyond 30 minutes into the future) remains challenging in current research.
We propose sparse training to reduce the training cost while preserving the prediction accuracy.
arXiv Detail & Related papers (2022-11-06T05:41:39Z)
- Magic ELF: Image Deraining Meets Association Learning and Transformer [63.761812092934576]
This paper aims to unify CNN and Transformer to take advantage of their learning merits for image deraining.
A novel multi-input attention module (MAM) is proposed to associate rain removal and background recovery.
Our proposed method (dubbed ELF) outperforms the state-of-the-art approach (MPRNet) by 0.25 dB on average.
arXiv Detail & Related papers (2022-07-21T12:50:54Z)
- Switching in the Rain: Predictive Wireless x-haul Network Reconfiguration [17.891837432766764]
Wireless x-haul networks rely on microwave and millimeter-wave links between 4G and/or 5G base-stations to support ultra-high data rate and ultra-low latency.
However, precipitation may cause severe signal attenuation, which significantly degrades the network performance.
We develop a Predictive Network Reconfiguration framework that uses historical data to predict the future condition of each link and then prepares the network ahead of time for imminent disturbances.
arXiv Detail & Related papers (2022-03-07T13:40:38Z)
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
- FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators [45.520430157112884]
FourCastNet accurately forecasts high-resolution, fast-timescale variables such as the surface wind speed, precipitation, and atmospheric water vapor.
It has important implications for planning wind energy resources, predicting extreme weather events such as tropical cyclones, extra-tropical cyclones, and atmospheric rivers.
FourCastNet generates a week-long forecast in less than 2 seconds, orders of magnitude faster than IFS.
arXiv Detail & Related papers (2022-02-22T22:19:35Z)
- Nowcasting-Nets: Deep Neural Network Structures for Precipitation Nowcasting Using IMERG [1.9860735109145415]
We use Recurrent and Convolutional deep neural network structures to address the challenge of precipitation nowcasting.
A total of five models are trained using Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM (IMERG) precipitation data over the Eastern Contiguous United States (CONUS).
The models were designed to provide forecasts with a lead time of up to 1.5 hours and, by using a feedback loop approach, the ability of the models to extend the forecast time to 4.5 hours was also investigated.
arXiv Detail & Related papers (2021-08-16T02:55:32Z)
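The feedback-loop idea is essentially autoregressive rollout: predictions are appended to the input window so the model can be stepped beyond its native lead time. A minimal PyTorch sketch under that assumption follows; the model interface here is hypothetical, not the paper's API.

```python
import torch

@torch.no_grad()
def feedback_rollout(model, frames, n_steps):
    """Extend a short-lead nowcast by feeding predictions back as inputs.
    `model` is assumed to map a (N, T, H, W) history of frames to the
    next (N, 1, H, W) frame."""
    history = frames.clone()          # (N, T, H, W) most recent observations
    preds = []
    for _ in range(n_steps):
        nxt = model(history)          # predict the next frame
        preds.append(nxt)
        # slide the window: drop the oldest frame, append the prediction
        history = torch.cat([history[:, 1:], nxt], dim=1)
    return torch.cat(preds, dim=1)    # (N, n_steps, H, W)
```

Because each step consumes earlier predictions, errors compound, which is consistent with skill degrading as the lead time is stretched from 1.5 to 4.5 hours.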
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed with substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
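Monte-Carlo VaR estimation itself is compact: simulate returns and read off a loss quantile. The sketch below assumes a plain Gaussian return model purely for illustration; the paper's simulation setup is not reproduced.

```python
import numpy as np

def monte_carlo_var(mu, sigma, horizon_days=1, alpha=0.01,
                    n_sims=100_000, seed=0):
    """Value at Risk by simulation: draw returns over the horizon and
    take the alpha-quantile of the loss distribution."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(mu * horizon_days,
                         sigma * np.sqrt(horizon_days), size=n_sims)
    return -np.quantile(returns, alpha)  # loss exceeded with probability alpha

# e.g., daily returns with 0.03% mean and 1% volatility:
print(f"99% 1-day VaR: {monte_carlo_var(0.0003, 0.01):.4f}")
```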
- Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging [48.99717153937717]
We present WAGMA-SGD, a wait-avoiding stochastic optimizer that reduces global communication via subgroup weight exchange. We train ResNet-50 on ImageNet, a Transformer for machine translation, and deep reinforcement learning for navigation at scale. Compared with state-of-the-art decentralized SGD variants, WAGMA-SGD significantly improves training throughput.
arXiv Detail & Related papers (2020-04-30T22:11:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.