Hierarchically Disentangled Recurrent Network for Factorizing System Dynamics of Multi-scale Systems
- URL: http://arxiv.org/abs/2407.20152v1
- Date: Mon, 29 Jul 2024 16:25:43 GMT
- Title: Hierarchically Disentangled Recurrent Network for Factorizing System Dynamics of Multi-scale Systems
- Authors: Rahul Ghosh, Zac McEachran, Arvind Renganathan, Kelly Lindsay, Somya Sharma, Michael Steinbach, John Nieber, Christopher Duffy, Vipin Kumar
- Abstract summary: We present a knowledge-guided machine learning (KGML) framework for modeling multi-scale processes, and study its performance in the context of streamflow forecasting in hydrology.
- Score: 4.634606500665259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a knowledge-guided machine learning (KGML) framework for modeling multi-scale processes, and study its performance in the context of streamflow forecasting in hydrology. Specifically, we propose a novel hierarchical recurrent neural architecture that factorizes the system dynamics at multiple temporal scales and captures their interactions. This framework consists of an inverse and a forward model. The inverse model is used to empirically resolve the system's temporal modes from data (physical model simulations, observed data, or a combination of them from the past), and these states are then used in the forward model to predict streamflow. In a hydrological system, these modes can represent different processes, evolving at different temporal scales (e.g., slow: groundwater recharge and baseflow vs. fast: surface runoff due to extreme rainfall). A key advantage of our framework is that once trained, it can incorporate new observations into the model's context (internal state) without expensive optimization approaches (e.g., EnKF) that are traditionally used in physical sciences for data assimilation. Experiments with several river catchments from the NWS NCRFC region show the efficacy of this ML-based data assimilation framework compared to standard baselines, especially for basins that have a long history of observations. Even for basins that have a shorter observation history, we present two orthogonal strategies of training our FHNN framework: (a) using simulation data from imperfect simulations and (b) using observation data from multiple basins to build a global model. We show that both of these strategies (that can be used individually or together) are highly effective in mitigating the lack of training data. The improvement in forecast accuracy is particularly noteworthy for basins where local models perform poorly because of data sparsity.
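For intuition, the sketch below shows one way the inverse/forward factorization described in the abstract could be wired up in PyTorch. It is a minimal illustration only: the module names (InverseModel, ForwardModel), the GRU cells, the hidden sizes, and the way new observations are folded back into the context are assumptions made for exposition, not the paper's FHNN architecture.

```python
# Minimal sketch (PyTorch) of an inverse/forward factorization at two temporal
# scales; all names and sizes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    """Encodes past forcings/observations into slow and fast latent modes."""
    def __init__(self, in_dim, slow_dim=8, fast_dim=16):
        super().__init__()
        self.slow_rnn = nn.GRU(in_dim, slow_dim, batch_first=True)   # slow modes (e.g., baseflow)
        self.fast_rnn = nn.GRU(in_dim, fast_dim, batch_first=True)   # fast modes (e.g., runoff)

    def forward(self, history):                   # history: (B, T_past, in_dim)
        _, h_slow = self.slow_rnn(history)
        _, h_fast = self.fast_rnn(history)
        return h_slow, h_fast                     # factorized system state

class ForwardModel(nn.Module):
    """Rolls the factorized state forward under future forcings to predict streamflow."""
    def __init__(self, forcing_dim, slow_dim=8, fast_dim=16):
        super().__init__()
        self.rnn = nn.GRU(forcing_dim + slow_dim, fast_dim, batch_first=True)
        self.head = nn.Linear(fast_dim, 1)

    def forward(self, forcings, h_slow, h_fast):  # forcings: (B, T_future, forcing_dim)
        slow_ctx = h_slow.transpose(0, 1).expand(-1, forcings.size(1), -1)
        out, _ = self.rnn(torch.cat([forcings, slow_ctx], dim=-1), h_fast)
        return self.head(out)                     # predicted streamflow: (B, T_future, 1)

# "Assimilation" in this sketch simply means re-running the trained inverse model
# on a window that includes the newest observations -- no EnKF-style optimization.
inverse, forward_model = InverseModel(in_dim=5), ForwardModel(forcing_dim=4)
history  = torch.randn(2, 365, 5)                 # past forcings + observed streamflow
forcings = torch.randn(2, 7, 4)                   # future meteorological forcings
h_slow, h_fast = inverse(history)
forecast = forward_model(forcings, h_slow, h_fast)  # shape (2, 7, 1)
```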
Related papers
- Graph Neural Networks and Differential Equations: A hybrid approach for data assimilation of fluid flows [0.0]
This study presents a novel hybrid approach that combines Graph Neural Networks (GNNs) with Reynolds-Averaged Navier Stokes (RANS) equations.
The results demonstrate significant improvements in the accuracy of the reconstructed mean flow compared to purely data-driven models.
arXiv Detail & Related papers (2024-11-14T14:31:52Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Deep Generative Data Assimilation in Multimodal Setting [0.1052166918701117]
In this work, we propose SLAMS: Score-based Latent Assimilation in Multimodal Setting.
We assimilate in-situ weather station data and ex-situ satellite imagery to calibrate the vertical temperature profiles, globally.
Our work is the first to apply deep generative framework for multimodal data assimilation using real-world datasets.
arXiv Detail & Related papers (2024-04-10T00:25:09Z)
- Modeling Spatio-temporal Dynamical Systems with Neural Discrete Learning and Levels-of-Experts [33.335735613579914]
We address the problem of modeling and estimating changes in the state of spatio-temporal dynamical systems from a sequence of observations such as video frames.
The paper proposes a universal expert module, an optical flow estimation component, to capture the laws of general physical processes in a data-driven fashion.
We conduct extensive experiments and ablations to demonstrate that the proposed framework achieves large performance margins, compared with the existing SOTA baselines.
arXiv Detail & Related papers (2024-02-06T06:27:07Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Differentiable, learnable, regionalized process-based models with physical outputs can approach state-of-the-art hydrologic prediction accuracy [1.181206257787103]
We show that differentiable, learnable, process-based models (called delta models here) can approach the performance level of LSTM for the intensively-observed variable (streamflow) with regionalized parameterization.
We use a simple hydrologic model HBV as the backbone and use embedded neural networks, which can only be trained in a differentiable programming framework.
arXiv Detail & Related papers (2022-03-28T15:06:53Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Transfer learning to improve streamflow forecasts in data sparse regions [0.0]
We study Transfer Learning (TL) through fine-tuning and parameter transfer for better generalization of streamflow prediction in data-sparse regions.
We fit a standard recurrent neural network, in the form of a Long Short-Term Memory (LSTM) network, on a sufficiently large source-domain dataset.
We present a methodology to implement transfer learning approaches for hydrologic applications by separating the spatial and temporal components of the model and training the model to generalize.
arXiv Detail & Related papers (2021-12-06T14:52:53Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the timeline.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z)