Multi-Time-Scale Input Approaches for Hourly-Scale Rainfall-Runoff
Modeling based on Recurrent Neural Networks
- URL: http://arxiv.org/abs/2103.10932v1
- Date: Sat, 30 Jan 2021 07:51:55 GMT
- Title: Multi-Time-Scale Input Approaches for Hourly-Scale Rainfall-Runoff
Modeling based on Recurrent Neural Networks
- Authors: Kei Ishida, Masato Kiyama, Ali Ercan, Motoki Amagasaki, Tongbi Tu,
Makoto Ueda
- Abstract summary: Two approaches are proposed to reduce the required computational time for time-series modeling through a recurrent neural network (RNN).
One approach provides coarse and fine temporal resolutions of the input time-series to the RNN in parallel.
The results confirm that both of the proposed approaches can reduce the computational time for the training of RNN significantly.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes two straightforward yet effective approaches to reduce
the required computational time of the training process for time-series
modeling through a recurrent neural network (RNN) using multi-time-scale
time-series data as input. One approach provides coarse and fine temporal
resolutions of the input time-series to RNN in parallel. The other concatenates
the coarse and fine temporal resolutions of the input time-series data over
time before considering them as the input to RNN. In both approaches, first,
finer temporal resolution data are utilized to learn the fine temporal scale
behavior of the target data. Next, coarser temporal resolution data are
expected to capture long-duration dependencies between the input and target
variables. The proposed approaches were implemented for hourly rainfall-runoff
modeling at a snow-dominated watershed by employing a long short-term
memory (LSTM) network, a type of RNN. Subsequently, the daily
and hourly meteorological data were utilized as the input, and hourly flow
discharge was considered as the target data. The results confirm that both of
the proposed approaches can reduce the computational time for the training of
RNN significantly (up to 32.4 times). Furthermore, one of the proposed
approaches improves the estimation accuracy.
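The two input-construction schemes described in the abstract can be illustrated with a rough sketch (this is not the authors' implementation; the 24-hours-per-day alignment, variable names, and list-based representation are assumptions for illustration only):

```python
def parallel_inputs(hourly, daily):
    """Approach 1 (sketch): pair each hourly step with the daily value
    covering it, so the RNN receives the coarse and fine resolutions
    in parallel at every time step."""
    # Repeat each daily value across its 24 hourly steps, then pair.
    daily_expanded = [d for d in daily for _ in range(24)][: len(hourly)]
    return list(zip(hourly, daily_expanded))


def concatenated_inputs(hourly, daily, n_fine_hours):
    """Approach 2 (sketch): older history enters at daily resolution and
    only the most recent n_fine_hours enter at hourly resolution, so the
    concatenated sequence is much shorter than the full hourly record."""
    n_fine_days = n_fine_hours // 24
    coarse_part = daily[:-n_fine_days] if n_fine_days else daily
    fine_part = hourly[-n_fine_hours:]
    return coarse_part + fine_part
```

In both sketches the fine-resolution portion carries the short-time-scale behavior, while the coarse portion covers long-duration dependencies at a fraction of the sequence length, which is where the reported training-time savings come from.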
Related papers
- Zero-Shot Temporal Resolution Domain Adaptation for Spiking Neural Networks [3.2366933261812076]
Spiking Neural Networks (SNNs) are biologically-inspired deep neural networks that efficiently extract temporal information.
SNN model parameters are sensitive to temporal resolution, leading to significant performance drops when the temporal resolution of target data at the edge is not the same.
We propose three novel domain adaptation methods for adapting neuron parameters to account for the change in time resolution without re-training on target time-resolution.
arXiv Detail & Related papers (2024-11-07T14:58:51Z) - PreRoutGNN for Timing Prediction with Order Preserving Partition: Global
Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling [84.34811206119619]
We propose a two-stage approach to pre-routing timing prediction.
First, we propose global circuit training to pre-train a graph auto-encoder that learns the global graph embedding from circuit netlist.
Second, we use a novel node updating scheme for message passing on GCN, following the topological sorting sequence of the learned graph embedding and circuit graph.
Experiments on 21 real-world circuits achieve a new SOTA R2 of 0.93 for slack prediction, significantly surpassing the 0.59 of the previous SOTA method.
arXiv Detail & Related papers (2024-02-27T02:23:07Z) - Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series.
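A minimal sketch of steepness-based time-step selection in the spirit of this summary (the threshold criterion, function name, and parameters are assumptions, not the paper's actual rule):

```python
def adaptive_time_steps(series, threshold):
    """Select indices where the series changes steeply, always keeping
    the endpoints, so 'spike-like' segments get fine time steps and
    flat segments get coarse ones. `threshold` is a hypothetical knob."""
    kept = [0]
    for i in range(1, len(series) - 1):
        # Keep a step once the change since the last kept step is large.
        if abs(series[i] - series[kept[-1]]) >= threshold:
            kept.append(i)
    kept.append(len(series) - 1)
    return kept
```

Training the model only on the kept indices reduces the number of recurrent steps on flat stretches of the series.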
arXiv Detail & Related papers (2023-06-02T16:46:47Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are a deep learning model that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Task-Synchronized Recurrent Neural Networks [0.0]
When data are sampled non-uniformly in time, Recurrent Neural Networks (RNNs) traditionally either ignore the irregularity, feed the time differences as additional inputs, or resample the data.
We propose an elegant straightforward alternative approach where instead the RNN is in effect resampled in time to match the time of the data or the task at hand.
We confirm empirically that our models can effectively compensate for the time-non-uniformity of the data and demonstrate that they compare favorably to data resampling, classical RNN methods, and alternative RNN models.
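One common way an RNN can be "resampled in time" is to decay the carried hidden state according to the elapsed interval; a hedged sketch of that idea (the exponential-decay rule and the `tau` time constant are assumptions for illustration, not necessarily this paper's mechanism):

```python
import math


def decayed_state(h_prev, dt, tau):
    """Sketch: scale the carried hidden state by exp(-dt / tau), so a
    long gap between observations weakens the state's influence, in
    effect matching the recurrence to the data's irregular timing."""
    decay = math.exp(-dt / tau)
    return [x * decay for x in h_prev]
```

With `dt = 0` the state passes through unchanged, while a large gap drives it toward zero, mimicking many small uniform steps.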
arXiv Detail & Related papers (2022-04-11T15:27:40Z) - Recurrent Neural Networks for Learning Long-term Temporal Dependencies
with Reanalysis of Time Scale Representation [16.32068729107421]
We argue that the interpretation of a forget gate as a temporal representation is valid when the gradient of loss with respect to the state decreases exponentially as time goes back.
We propose an approach to construct new RNNs that can represent a longer time scale than conventional models.
arXiv Detail & Related papers (2021-11-05T06:22:58Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Online learning of windmill time series using Long Short-term Cognitive
Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors with respect to a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Accelerated solving of coupled, non-linear ODEs through LSTM-AI [0.0]
The present project aims to use machine learning, specifically neural networks (NN), to learn the trajectories of a set of coupled ordinary differential equations (ODEs)
We observed computational speed ups ranging from 9.75 to 197 times when comparing prediction compute time with compute time for obtaining the numeric solution.
arXiv Detail & Related papers (2020-09-11T18:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.