Update hydrological states or meteorological forcings? Comparing data assimilation methods for differentiable hydrologic models
- URL: http://arxiv.org/abs/2502.16444v1
- Date: Sun, 23 Feb 2025 05:08:05 GMT
- Title: Update hydrological states or meteorological forcings? Comparing data assimilation methods for differentiable hydrologic models
- Authors: Amirmoez Jamaat, Yalan Song, Farshid Rahmani, Jiangtao Liu, Kathryn Lawson, Chaopeng Shen
- Abstract summary: Data assimilation (DA) enables hydrologic models to update their internal states using near-real-time observations for more accurate forecasts. We developed variational DA methods for differentiable models, including optimizing adjusters for just precipitation data. Our DA framework does not need systematic training data and could serve as a practical DA scheme for whole river networks.
- Score: 0.923607423080658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data assimilation (DA) enables hydrologic models to update their internal states using near-real-time observations for more accurate forecasts. With deep neural networks like long short-term memory (LSTM), using either lagged observations as inputs (called "data integration") or variational DA has shown success in improving forecasts. However, it is unclear which methods are performant or optimal for physics-informed machine learning ("differentiable") models, which represent only a small number of physically meaningful states while using deep networks to supply parameters or missing processes. Here we developed variational DA methods for differentiable models, including optimizing adjusters for just precipitation data, just model internal hydrological states, or both. Our results demonstrated that differentiable streamflow models using the CAMELS dataset can benefit from variational DA as strongly as LSTM does, with the one-day lead time median Nash-Sutcliffe efficiency (NSE) elevated from 0.75 to 0.82. The resulting forecasts matched or outperformed LSTM with DA in the eastern, northwestern, and central Great Plains regions of the conterminous United States. Both precipitation and state adjusters were needed to achieve these results, with the latter being substantially more effective on its own and the former adding moderate benefits for high flows. Our DA framework does not need systematic training data and could serve as a practical DA scheme for whole river networks.
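To make the variational DA scheme concrete, here is a minimal sketch of gradient-based assimilation through a differentiable hydrologic model in PyTorch. Everything in it is illustrative: `hbv_step` is a hypothetical stand-in for a real differentiable model such as HBV, and the multiplicative precipitation adjuster, additive state adjuster, and squared-error loss are plausible assumptions rather than the authors' exact formulation.

```python
# Hedged sketch of variational DA for a differentiable hydrologic model.
# All names and shapes here are assumptions for illustration only.
import torch

def hbv_step(states, precip):
    # Hypothetical placeholder for one daily step of a differentiable
    # hydrologic model (e.g., HBV); returns (new_states, streamflow).
    new_states = 0.9 * states + precip.unsqueeze(-1)
    streamflow = 0.1 * new_states.sum(-1)
    return new_states, streamflow

def assimilate(states, precip_window, q_obs, n_iter=50, lr=0.1):
    """Optimize a log-scale multiplicative adjuster for precipitation and an
    additive adjuster for internal states so that simulated streamflow matches
    recent observations; gradients flow through the model itself."""
    precip_adj = torch.zeros_like(precip_window, requires_grad=True)
    state_adj = torch.zeros_like(states, requires_grad=True)
    opt = torch.optim.Adam([precip_adj, state_adj], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        s = states + state_adj                     # adjusted initial states
        q_sim = []
        for t in range(precip_window.shape[0]):
            p = precip_window[t] * torch.exp(precip_adj[t])  # adjusted forcing
            s, q = hbv_step(s, p)
            q_sim.append(q)
        loss = torch.mean((torch.stack(q_sim) - q_obs) ** 2)
        loss.backward()
        opt.step()
    return (states + state_adj).detach()           # updated states for forecasting

# Example: 3 basins, 2 storage states, a 7-day assimilation window.
states = torch.rand(3, 2)
precip_window = torch.rand(7, 3)
q_obs = torch.rand(7, 3)
updated_states = assimilate(states, precip_window, q_obs)
```

In a forecasting setting, the returned states would initialize the next forecast step; consistent with the abstract, one would expect the state adjuster to carry most of the benefit and the precipitation adjuster to help mainly for high flows.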
Related papers
- Exploring the Use of Machine Learning Weather Models in Data Assimilation [0.0]
GraphCast and NeuralGCM are two promising ML-based weather models, but their suitability for data assimilation remains under-explored.
We compare the tangent linear and adjoint (TL/AD) results of GraphCast and NeuralGCM with those of the Model for Prediction Across Scales - Atmosphere (MPAS-A), a well-established numerical weather prediction (NWP) model.
While the adjoint results of both GraphCast and NeuralGCM show some similarity to those of MPAS-A, they also exhibit unphysical noise at various vertical levels, raising concerns about their robustness for operational DA systems.
arXiv Detail & Related papers (2024-11-22T02:18:28Z)
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Hierarchically Disentangled Recurrent Network for Factorizing System Dynamics of Multi-scale Systems [4.634606500665259]
We present a knowledge-guided machine learning (KGML) framework for modeling multi-scale processes.
We study its performance in the context of streamflow forecasting in hydrology.
arXiv Detail & Related papers (2024-07-29T16:25:43Z)
- Semi-Supervised Model-Free Bayesian State Estimation from Compressed Measurements [57.04370580292727]
We consider data-driven Bayesian state estimation from compressed measurements.
The dimension of the temporal measurement vector is lower than that of the temporal state vector to be estimated.
The underlying dynamical model of the state's evolution is unknown for a 'model-free process'.
arXiv Detail & Related papers (2024-07-10T05:03:48Z)
- Towards Generalized Hydrological Forecasting using Transformer Models for 120-Hour Streamflow Prediction [0.34530027457862006]
This study explores the efficacy of a Transformer model for 120-hour streamflow prediction across 125 diverse locations in Iowa, US.
We benchmarked the Transformer model's performance against three deep learning models (LSTM, GRU, and Seq2Seq) and the Persistence approach.
The study reveals the Transformer model's superior performance, maintaining higher median NSE and KGE scores and exhibiting the lowest NRMSE values.
arXiv Detail & Related papers (2024-06-11T17:26:14Z)
- Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment [81.78901060731269]
Test-time adaptation (TTA) aims to improve the performance of source-domain pre-trained models on previously unseen, shifted target domains.
Traditional TTA methods primarily adapt model weights based on target data streams, making model performance sensitive to the amount and order of target data.
The recently proposed diffusion-driven TTA methods mitigate this by adapting model inputs instead of weights, where an unconditional diffusion model, trained on the source domain, transforms target-domain data into a synthetic domain that is expected to approximate the source domain.
arXiv Detail & Related papers (2024-06-06T17:39:09Z)
- Toward Routing River Water in Land Surface Models with Recurrent Neural Networks [0.0]
We study the performance of recurrent neural networks (RNNs) for river routing in land surface models (LSMs). Instead of observed precipitation, the LSM-RNN uses instantaneous runoff calculated from physics-based models as an input. We train the model with data from river basins spanning the globe and test it using historical streamflow measurements.
arXiv Detail & Related papers (2024-04-22T14:21:37Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Fast-Slow Test-Time Adaptation for Online Vision-and-Language Navigation [67.18144414660681]
We propose a Fast-Slow Test-Time Adaptation (FSTTA) approach for online Vision-and-Language Navigation (VLN)
Our method obtains impressive performance gains on four popular benchmarks.
arXiv Detail & Related papers (2023-11-22T07:47:39Z)
- Differentiable, learnable, regionalized process-based models with physical outputs can approach state-of-the-art hydrologic prediction accuracy [1.181206257787103]
We show that differentiable, learnable, process-based models (called delta models here) can approach the performance level of LSTM for the intensively-observed variable (streamflow) with regionalized parameterization.
We use the simple hydrologic model HBV as the backbone, with embedded neural networks that can only be trained in a differentiable programming framework.
arXiv Detail & Related papers (2022-03-28T15:06:53Z)
- Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding [60.226644697970116]
Domain classification is the fundamental task in natural language understanding (NLU).
Most existing continual learning approaches suffer from low accuracy and performance fluctuation.
We propose a hyperparameter-free continual learning model for text data that can stably produce high performance under various environments.
arXiv Detail & Related papers (2022-01-05T02:46:16Z)
- Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation (a hedged sketch of the core idea follows this list).
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
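As a rough illustration of the learnable-dropout idea above, here is a minimal PyTorch module that trains per-feature drop rates jointly with the network weights via a Concrete (relaxed Bernoulli) reparameterization. The class name, initialization, and choice of relaxation are assumptions on my part; the paper's actual estimator may differ.

```python
# Hedged sketch of learnable dropout: drop rates are parameters optimized
# jointly with the other weights. The Concrete relaxation used here is an
# assumption, not necessarily the paper's exact estimator.
import math
import torch
import torch.nn as nn

class LearnableDropout(nn.Module):
    def __init__(self, n_features, init_drop=0.5, temperature=0.1):
        super().__init__()
        # Unconstrained logit per feature; sigmoid maps it to a drop rate.
        init_logit = math.log(init_drop / (1.0 - init_drop))
        self.logit = nn.Parameter(torch.full((n_features,), init_logit))
        self.temperature = temperature

    def forward(self, x):
        drop_rate = torch.sigmoid(self.logit)
        if self.training:
            # Differentiable approximate Bernoulli(1 - drop_rate) mask.
            dist = torch.distributions.RelaxedBernoulli(
                self.temperature, probs=1.0 - drop_rate)
            mask = dist.rsample(x.shape[:-1])        # one mask per sample
            return x * mask / (1.0 - drop_rate)      # inverted-dropout scaling
        return x                                     # identity at evaluation
```

Because the mask is reparameterized, gradients reach the logits through ordinary backpropagation, so the drop rates adapt alongside the other model parameters during training.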