Comparative Analysis on Snowmelt-Driven Streamflow Forecasting Using Machine Learning Techniques
- URL: http://arxiv.org/abs/2404.13327v2
- Date: Tue, 23 Apr 2024 05:32:11 GMT
- Title: Comparative Analysis on Snowmelt-Driven Streamflow Forecasting Using Machine Learning Techniques
- Authors: Ukesh Thapa, Bipun Man Pati, Samit Thapa, Dhiraj Pyakurel, Anup Shrestha
- Abstract summary: We propose a state-of-the-art (SOTA) deep learning sequential model, leveraging the Temporal Convolutional Network (TCN).
To evaluate the performance of our proposed model, we conducted a comparative analysis with other popular models including Support Vector Regression (SVR), Long Short Term Memory (LSTM), and Transformer.
The average metrics revealed that TCN outperformed the other models, with an average MAE of 0.011, RMSE of 0.023, $R^{2}$ of 0.991, KGE of 0.992, and NSE of 0.991.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of machine learning techniques has led to their widespread application in various domains including water resources. However, snowmelt modeling remains an area that has not been extensively explored. In this study, we propose a state-of-the-art (SOTA) deep learning sequential model, leveraging the Temporal Convolutional Network (TCN), for snowmelt-driven discharge modeling in the Himalayan basin of the Hindu Kush Himalayan Region. To evaluate the performance of our proposed model, we conducted a comparative analysis with other popular models including Support Vector Regression (SVR), Long Short Term Memory (LSTM), and Transformer. Furthermore, Nested cross-validation (CV) is used with five outer folds and three inner folds, and hyper-parameter tuning is performed on the inner folds. To evaluate the performance of the model mean absolute error (MAE), root mean square error (RMSE), R square ($R^{2}$), Kling-Gupta Efficiency (KGE), and Nash-Sutcliffe Efficiency (NSE) are computed for each outer fold. The average metrics revealed that TCN outperformed the other models, with an average MAE of 0.011, RMSE of 0.023, $R^{2}$ of 0.991, KGE of 0.992, and NSE of 0.991. The findings of this study demonstrate the effectiveness of the deep learning model as compared to traditional machine learning approaches for snowmelt-driven streamflow forecasting. Moreover, the superior performance of TCN highlights its potential as a promising deep learning model for similar hydrological applications.
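The hydrological metrics used in the evaluation above (KGE and NSE) can be sketched in a few lines. This is an illustrative NumPy implementation of the standard formulas, not the authors' code:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of the sum of squared
    errors to the variance of the observations (1.0 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency: combines correlation (r), variability ratio
    (alpha), and bias ratio (beta) into a single score (1.0 is perfect)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Note that an NSE of 0 corresponds to predicting the observed mean, so scores near 0.991, as reported for TCN, indicate the model explains nearly all of the observed variance on the outer folds.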
Related papers
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z) - Towards Generalized Hydrological Forecasting using Transformer Models for 120-Hour Streamflow Prediction [0.34530027457862006]
This study explores the efficacy of a Transformer model for 120-hour streamflow prediction across 125 diverse locations in Iowa, US.
We benchmarked the Transformer model's performance against three deep learning models (LSTM, GRU, and Seq2Seq) and the Persistence approach.
The study reveals the Transformer model's superior performance, maintaining higher median NSE and KGE scores and exhibiting the lowest NRMSE values.
arXiv Detail & Related papers (2024-06-11T17:26:14Z) - The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z) - Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z) - Machine Learning Emulation of Urban Land Surface Processes [0.0]
We develop an urban neural network (UNN) trained on the mean predicted flux from 22 urban land surface models (ULSMs) at one site.
When compared to a reference ULSM (Town Energy Balance; TEB), the UNN has greater accuracy relative to flux observations, less computational cost, and requires fewer parameters.
Although the application is currently constrained by the training data (1 site), we show a novel approach to improve the modeling of surface flux by combining the strengths of several ULSMs into one using ML.
arXiv Detail & Related papers (2021-12-21T18:47:46Z) - Rainfall-runoff prediction using a Gustafson-Kessel clustering based Takagi-Sugeno Fuzzy model [0.0]
A rainfall-runoff model predicts surface runoff either using a physically-based approach or using a systems-based approach.
We propose a new rainfall-runoff model developed using Gustafson-Kessel clustering-based TS Fuzzy model.
arXiv Detail & Related papers (2021-08-22T10:02:51Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Convolutional conditional neural processes for local climate downscaling [31.887343372542805]
A new model is presented for multisite statistical downscaling of temperature and precipitation using convolutional conditional neural processes (convCNPs).
The convCNP model is shown to outperform an ensemble of existing downscaling techniques over Europe for both temperature and precipitation.
A substantial improvement is seen in the representation of extreme precipitation events.
arXiv Detail & Related papers (2021-01-20T03:45:21Z) - DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576]
Convolutional neural networks have achieved great success in computer vision tasks despite large computation overhead.
Structured (channel) pruning is usually applied to reduce the model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules which may lead to tremendous pruning space.
arXiv Detail & Related papers (2020-11-04T07:43:01Z) - High Temporal Resolution Rainfall Runoff Modelling Using Long-Short-Term-Memory (LSTM) Networks [0.03694429692322631]
The model was tested for a watershed in Houston, TX, known for severe flood events.
The LSTM network's capability in learning long-term dependencies between the input and output of the network allowed modeling RR with high resolution in time.
arXiv Detail & Related papers (2020-02-07T00:38:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.