Fast and accurate learned multiresolution dynamical downscaling for
precipitation
- URL: http://arxiv.org/abs/2101.06813v1
- Date: Mon, 18 Jan 2021 00:25:04 GMT
- Title: Fast and accurate learned multiresolution dynamical downscaling for
precipitation
- Authors: Jiali Wang, Zhengchun Liu, Ian Foster, Won Chang, Rajkumar Kettimuthu,
Rao Kotamarthi
- Abstract summary: We use a combination of low- and high-resolution simulations to train a neural network to map from the former to the latter.
We train each CNN type both with a conventional loss function, such as mean square error (MSE), and with a conditional generative adversarial network (CGAN).
We compare the four new CNN-derived high-resolution precipitation results with precipitation generated from the original high-resolution simulations, a bilinear interpolator, and the state-of-the-art CNN-based super-resolution (SR) technique.
- Score: 0.9786690381850356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study develops a neural network-based approach for emulating
high-resolution modeled precipitation data with comparable statistical
properties but at greatly reduced computational cost. The key idea is to use a
combination of low- and high-resolution simulations to train a neural network
to map from the former to the latter. Specifically, we define two types of
CNNs, one that stacks variables directly and one that encodes each variable
before stacking, and we train each CNN type both with a conventional loss
function, such as mean square error (MSE), and with a conditional generative
adversarial network (CGAN), for a total of four CNN variants. We compare the
four new CNN-derived high-resolution precipitation results with precipitation
generated from the original high-resolution simulations, a bilinear interpolator,
and the state-of-the-art CNN-based super-resolution (SR) technique. Results
show that the SR technique produces results similar to those of the bilinear
interpolator, with smoother spatial and temporal distributions and smaller
variability and extremes than the original high-resolution simulations. While
the new CNNs trained with MSE generate better results over some regions than the
interpolator and SR technique do, their predictions are still not as close to
the original high-resolution simulations. The CNNs trained with CGAN generate
more realistic and physically reasonable results, better capturing not only
data variability in time and space but also extremes such as intense and
long-lasting storms. The proposed CNN-based downscaling approach can downscale
precipitation from 50 km to 12 km resolution in 14 min for 30 years of data once
the network is trained (training takes 4 hours using 1 GPU), while conventional
dynamical downscaling would take 1 month using 600 CPU cores to generate
simulations at 12 km resolution over the contiguous United States.
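For reference, the bilinear-interpolation baseline that the learned downscalers are compared against can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the grid sizes and upscaling factor below are arbitrary toy values:

```python
import numpy as np

def bilinear_upscale(field, factor):
    """Upsample a 2-D field by `factor` using bilinear interpolation.

    Mimics the baseline interpolator that learned CNN downscalers are
    compared against (e.g. coarse ~50 km -> fine ~12 km precipitation grids).
    """
    h, w = field.shape
    new_h, new_w = h * factor, w * factor
    # Positions of output pixels expressed in input-grid coordinates.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # fractional distance to the next grid row
    wx = (xs - x0)[None, :]  # fractional distance to the next grid column
    # Blend the four surrounding coarse-grid values.
    top = field[np.ix_(y0, x0)] * (1 - wx) + field[np.ix_(y0, x1)] * wx
    bot = field[np.ix_(y1, x0)] * (1 - wx) + field[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Toy example: upscale a 4x4 "coarse precipitation" field by 4x.
coarse = np.arange(16, dtype=float).reshape(4, 4)
fine = bilinear_upscale(coarse, 4)
print(fine.shape)  # (16, 16)
```

As the abstract notes, such an interpolator is cheap but necessarily smooths the field; the learned CGAN-trained CNNs are what recover realistic small-scale variability and extremes.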
Related papers
- Reusing Convolutional Neural Network Models through Modularization and
Composition [22.823870645316397]
We propose two modularization approaches named CNNSplitter and GradSplitter.
CNNSplitter decomposes a trained convolutional neural network (CNN) model into $N$ small reusable modules.
The resulting modules can be reused to patch existing CNN models or build new CNN models through composition.
arXiv Detail & Related papers (2023-11-08T03:18:49Z)
- Improving Urban Flood Prediction using LSTM-DeepLabv3+ and Bayesian
Optimization with Spatiotemporal feature fusion [7.790241122137617]
This study presented a CNN-RNN hybrid feature fusion modelling approach for urban flood prediction.
It integrated the strengths of CNNs in processing spatial features and RNNs in analyzing different dimensions of time sequences.
arXiv Detail & Related papers (2023-04-19T22:00:04Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge
Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at end-device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A
Case Study [0.0]
This paper proposes a CNN algorithm for the Alamosa Canyon Bridge as a real structure.
Three different CNN models were considered to predict one and two malfunctioned sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- Performance and accuracy assessments of an incompressible fluid solver
coupled with a deep Convolutional Neural Network [0.0]
The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers.
CNN has been introduced to solve this equation, leading to significant inference time reduction.
A hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level.
arXiv Detail & Related papers (2021-09-20T08:30:29Z)
- Learning from Images: Proactive Caching with Parallel Convolutional
Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce 71.6% computation time with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Deep learning for gravitational-wave data analysis: A resampling
white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- Implicit Convex Regularizers of CNN Architectures: Convex Optimization
of Two- and Three-Layer Networks in Polynomial Time [70.15611146583068]
We study training of Convolutional Neural Networks (CNNs) with ReLU activations.
We introduce an exact convex optimization formulation with polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension.
arXiv Detail & Related papers (2020-06-26T04:47:20Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization with a deep neural network.
Our algorithm requires far fewer communication rounds in theory.
Experiments on several datasets confirm the theory and demonstrate the effectiveness of the method.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Inferring Convolutional Neural Networks' accuracies from their
architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.