SWAT Watershed Model Calibration using Deep Learning
- URL: http://arxiv.org/abs/2110.03097v1
- Date: Wed, 6 Oct 2021 22:56:23 GMT
- Title: SWAT Watershed Model Calibration using Deep Learning
- Authors: M. K. Mudunuru, K. Son, P. Jiang, X. Chen
- Abstract summary: We present a fast, accurate, and reliable methodology to calibrate the SWAT model using deep learning (DL).
We develop DL-enabled inverse models based on convolutional neural networks that ingest streamflow data and estimate the SWAT model parameters.
Our results show that DL-based calibration outperforms traditional parameter estimation methods.
- Score: 0.860255319568951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Watershed models such as the Soil and Water Assessment Tool (SWAT) consist of
high-dimensional physical and empirical parameters. These parameters need to be
accurately calibrated for models to produce reliable predictions for
streamflow, evapotranspiration, snow water equivalent, and nutrient loading.
Existing parameter estimation methods are time-consuming, inefficient, and
computationally intensive, with reduced accuracy when estimating
high-dimensional parameters. In this paper, we present a fast, accurate, and
reliable methodology to calibrate the SWAT model (i.e., 21 parameters) using
deep learning (DL). We develop DL-enabled inverse models based on convolutional
neural networks to ingest streamflow data and estimate the SWAT model
parameters. Hyperparameter tuning is performed to identify the optimal neural
network architecture and the nine next best candidates. We use ensemble SWAT
simulations to train, validate, and test these DL models. We then estimate the
actual SWAT model parameters using observational data. We test and
validate the proposed DL methodology on the American River Watershed, located
in the Yakima River basin in the Pacific Northwest. Our results show that
DL model-based calibration outperforms traditional parameter estimation
methods such as generalized likelihood uncertainty estimation (GLUE). The
behavioral parameter sets estimated by DL have narrower ranges than GLUE and
produce values within the sampling range even under high relative observational
errors. This narrow range of parameters shows the reliability of the proposed
workflow to estimate sensitive parameters accurately even under noise. Due to
its fast and reasonably accurate estimations of process parameters, the
proposed DL workflow is attractive for calibrating integrated hydrologic models
for large spatial-scale applications.
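The inverse-model idea in the abstract can be sketched as a forward pass that maps a streamflow time series to the 21 SWAT parameters. The sketch below is a minimal, untrained illustration in NumPy; the layer sizes, kernel width, and series length are assumptions for demonstration and are not taken from the paper, which tunes its architecture via hyperparameter search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not from the paper): a daily
# streamflow series of length 365 mapped to 21 SWAT parameters.
SERIES_LEN, N_FILTERS, KERNEL, N_PARAMS = 365, 8, 7, 21

def conv1d(x, kernels):
    """Valid-mode 1D convolution: (len,) x (n_filters, k) -> (n_filters, len-k+1)."""
    windows = np.lib.stride_tricks.sliding_window_view(x, kernels.shape[1])
    return kernels @ windows.T

def inverse_model(streamflow, params):
    """CNN-style forward pass: conv -> ReLU -> global average pool -> linear head."""
    h = np.maximum(conv1d(streamflow, params["kernels"]), 0.0)  # (n_filters, out_len)
    pooled = h.mean(axis=1)                                     # (n_filters,)
    return params["W"] @ pooled + params["b"]                   # (n_params,)

# Randomly initialized weights stand in for a trained network.
params = {
    "kernels": rng.normal(size=(N_FILTERS, KERNEL)) * 0.1,
    "W": rng.normal(size=(N_PARAMS, N_FILTERS)) * 0.1,
    "b": np.zeros(N_PARAMS),
}

streamflow = rng.gamma(shape=2.0, scale=5.0, size=SERIES_LEN)  # synthetic hydrograph
estimate = inverse_model(streamflow, params)
print(estimate.shape)  # one value per SWAT parameter
```

In the paper's workflow, such a network would be trained on ensemble SWAT simulations (simulated streamflow as input, the parameter sets that produced it as targets), then applied once to observed streamflow to recover calibrated parameters, which is what makes the approach fast relative to iterative search methods like GLUE.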
Related papers
- Evaluating Deep Learning Approaches for Predictions in Unmonitored Basins with Continental-scale Stream Temperature Models [1.8067095934521364]
Recent machine learning (ML) models can harness vast datasets for accurate predictions at large spatial scales.
This study explores questions regarding model design and data needed for inputs and training to improve performance.
arXiv Detail & Related papers (2024-10-23T15:36:59Z)
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low Rank Adaptation (LoRA) is a popular parameter-efficient fine-tuning (PEFT) method that effectively adapts large pre-trained models for downstream tasks.
We propose a novel approach that employs a low rank tensor parametrization for model updates.
Our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.
arXiv Detail & Related papers (2024-10-05T06:59:50Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- Straightforward Layer-wise Pruning for More Efficient Visual Adaptation [0.0]
We propose a Straightforward layer-wise pruning method, called SLS, for pruning PETL-transferred models.
Our study reveals that layer-wise pruning, with a focus on storing pruning indices, addresses storage volume concerns.
arXiv Detail & Related papers (2024-07-19T14:10:35Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method, termed SSF, in which one only needs to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- Approximate Bayesian Computation for Physical Inverse Modeling [0.32771631221674324]
We propose a new method for automating the model parameter extraction process, resulting in accurate model fitting.
It is shown that the extracted parameters can be accurately predicted from the mobility curves using gradient boosted trees.
This work also provides a comparative analysis of the proposed framework with fine-tuned neural networks wherein the proposed framework is shown to perform better.
arXiv Detail & Related papers (2021-11-26T02:23:05Z)
- Combining data assimilation and machine learning to estimate parameters of a convective-scale model [0.0]
Errors in the representation of clouds in convection-permitting numerical weather prediction models can be introduced by different sources.
In this work, we look at the problem of parameter estimation through an artificial intelligence lens by training two types of artificial neural networks.
arXiv Detail & Related papers (2021-09-07T09:17:29Z)
- Artificial Intelligence Hybrid Deep Learning Model for Groundwater Level Prediction Using MLP-ADAM [0.0]
In this paper, a multi-layer perceptron is applied to simulate groundwater levels.
The adaptive moment estimation (Adam) algorithm is used for optimization.
Results indicate that deep learning algorithms can achieve high prediction accuracy.
arXiv Detail & Related papers (2021-07-29T10:11:45Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach generalizes better when estimating parameter values for operating conditions not used in training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Hybrid Physics and Deep Learning Model for Interpretable Vehicle State Prediction [75.1213178617367]
We propose a hybrid approach combining deep learning and physical motion models.
We achieve interpretability by restricting the output range of the deep neural network as part of the hybrid model.
The results show that our hybrid model can improve model interpretability with no decrease in accuracy compared to existing deep learning approaches.
arXiv Detail & Related papers (2021-03-11T15:21:08Z)
- Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.