Using Machine Learning to Discover Parsimonious and Physically-Interpretable Representations of Catchment-Scale Rainfall-Runoff Dynamics
- URL: http://arxiv.org/abs/2412.04845v3
- Date: Fri, 07 Feb 2025 20:54:48 GMT
- Title: Using Machine Learning to Discover Parsimonious and Physically-Interpretable Representations of Catchment-Scale Rainfall-Runoff Dynamics
- Authors: Yuan-Heng Wang, Hoshin V. Gupta
- Abstract summary: An underexplored aspect of machine learning is how to develop minimally-optimal representations.
Our own view is that ML-based modeling should be based on the use of computational units that are fundamentally interpretable by design.
We show, in the context of lumped catchment modeling, that physical interpretability and predictive performance can both be achieved using a relatively parsimonious distributed-state network.
- Score: 1.1510009152620668
- Abstract: Despite excellent real-world predictive performance of modern machine learning (ML) methods, many scientists hesitate to discard traditional physical-conceptual (PC) approaches due to their relative interpretability, which contributes to credibility during decision-making. In this context, a currently underexplored aspect of ML is how to develop minimally-optimal representations that can facilitate better insight regarding system functioning. Regardless of how this is achieved, parsimonious representations seem to better support the advancement of scientific understanding. Our own view is that ML-based modeling should be based on the use of computational units that are fundamentally interpretable by design. This paper continues our exploration of how ML can be exploited in the service of scientific investigation. We use the Mass-Conserving-Perceptron (MCP) as the fundamental computational unit in a generic network architecture to explore important issues related to the use of observational data for constructing models of dynamical systems. We show, in the context of lumped catchment modeling, that physical interpretability and predictive performance can both be achieved using a relatively parsimonious distributed-state multiple-flow-path network with context-dependent gating and information sharing across the nodes, suggesting that MCP-based modeling can play a significant role in the application of ML to geoscientific investigation.
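To make the MCP concept concrete, the following is a minimal numpy sketch of a single mass-conserving node with context-dependent gating. The linear gate, the context vector, and all names are illustrative assumptions for this summary, not the authors' implementation.

```python
# Minimal sketch of one mass-conserving node with context-dependent gating.
# The gate form, context vector, and names are assumptions for illustration.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def mcp_node_step(state, inflow, gate_params):
    """One time step of a hypothetical mass-conserving node.

    The gate partitions the available mass (state + inflow) among
    retention and outflow paths; because the fractions sum to one,
    mass is conserved by construction.
    """
    context = np.array([state, inflow, 1.0])    # context-dependent gating
    fractions = softmax(gate_params @ context)  # (1 + n_paths,), sums to 1
    available = state + inflow
    new_state = fractions[0] * available        # retained mass
    outflows = fractions[1:] * available        # e.g. fast and slow flow paths
    return new_state, outflows

# Route a short rainfall series through the node and check the mass balance.
rng = np.random.default_rng(0)
gate_params = rng.normal(size=(3, 3))           # (1 + 2 paths) x context dim
state = 10.0
for rain in [0.0, 5.0, 12.0, 3.0]:
    prev = state
    state, flows = mcp_node_step(state, rain, gate_params)
    assert np.isclose(prev + rain, state + flows.sum())  # mass conserved
    print(f"state={state:.2f}, outflows={np.round(flows, 2)}")
```

A network in the paper's sense would connect several such nodes with information sharing across them; this single node only illustrates the conservation and gating mechanics.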
Related papers
- MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science [62.96434290874878]
Current Multi-Modal Large Language Models (MLLMs) have shown strong capabilities in general visual reasoning tasks.
We develop a new framework, named Multi-Modal Scientific Reasoning with Physics Perception and Simulation (MAPS), based on an MLLM.
MAPS decomposes the expert-level multi-modal reasoning task into physical-diagram understanding via a Physical Perception Model (PPM) and reasoning with physical knowledge via a simulator.
arXiv Detail & Related papers (2025-01-18T13:54:00Z)
- Data-Efficient Inference of Neural Fluid Fields via SciML Foundation Model [49.06911227670408]
We show that a SciML foundation model can significantly improve the data efficiency of inferring real-world 3D fluid dynamics, with improved generalization.
We equip neural fluid fields with a novel collaborative training approach that utilizes augmented views and fluid features extracted by our foundation model.
arXiv Detail & Related papers (2024-12-18T14:39:43Z)
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- Large Language Model-Based Interpretable Machine Learning Control in Building Energy Systems [3.0309252269809264]
This paper explores Interpretable Machine Learning (IML), a branch of Machine Learning (ML) that enhances the transparency and understanding of models and their inferences.
We develop an innovative framework that combines the principles of Shapley values with the in-context learning capability of Large Language Models (LLMs).
The paper presents a case study to demonstrate the feasibility of the developed IML framework for model predictive control-based precooling under demand response events in a virtual testbed.
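To make the two ingredients concrete, here is a generic sketch that computes exact Shapley values for a toy energy model and formats them into an LLM prompt; the model, feature names, baseline, and prompt wording are illustrative assumptions, not the paper's framework (no LLM call is made).

```python
# Generic sketch: exact Shapley values for a toy prediction, formatted as
# in-context material for an LLM. All names are illustrative assumptions.
from itertools import combinations
from math import factorial
import numpy as np

def exact_shapley(predict, x, baseline):
    """Exact Shapley values by enumerating feature coalitions.

    Features outside a coalition are held at their baseline value.
    Feasible only for a handful of features (2^n coalitions).
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z = baseline.copy()
                z[list(S)] = x[list(S)]
                without_i = predict(z)
                z[i] = x[i]
                with_i = predict(z)
                phi[i] += w * (with_i - without_i)
    return phi

# Toy "building energy" model: a weighted sum of three features.
weights = np.array([0.6, 0.3, 0.1])
predict = lambda z: float(weights @ z)

features = ["outdoor_temp", "occupancy", "hour_of_day"]  # assumed names
x = np.array([30.0, 50.0, 14.0])
baseline = np.array([20.0, 10.0, 12.0])
phi = exact_shapley(predict, x, baseline)

# Hand the attributions to an LLM as context (prompt only, no API call).
prompt = "Explain this precooling decision given feature attributions:\n"
prompt += "\n".join(f"- {f}: {v:+.2f}" for f, v in zip(features, phi))
print(prompt)
```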
arXiv Detail & Related papers (2024-02-14T21:19:33Z)
- Towards Interpretable Physical-Conceptual Catchment-Scale Hydrological Modeling using the Mass-Conserving-Perceptron [1.1510009152620668]
This study sets the stage for interpretable regional-scale MCP-based hydrological modeling (using large sample data) by using neural architecture search to determine appropriate minimal representations for catchments in different hydroclimatic regimes.
arXiv Detail & Related papers (2024-01-25T21:26:49Z)
- A Mass-Conserving-Perceptron for Machine Learning-Based Modeling of Geoscientific Systems [1.1510009152620668]
We propose a physically-interpretable Mass Conserving Perceptron (MCP) as a way to bridge the gap between PC-based and ML-based modeling approaches.
The MCP exploits the inherent isomorphism between the directed graph structures underlying both PC models and GRNNs to explicitly represent the mass-conserving nature of physical processes.
arXiv Detail & Related papers (2023-10-12T18:09:33Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
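For readers unfamiliar with the mechanism, below is a minimal numpy sketch of a sparse MoE layer with top-1 routing; the shapes and gating details are generic illustrations, not the paper's vision-language architecture.

```python
# Minimal sketch of a sparse mixture-of-experts (MoE) layer with top-1
# gating. Shapes and routing are generic illustrations, not the paper's.
import numpy as np

rng = np.random.default_rng(42)
d_model, n_experts = 8, 4

W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(X):
    """Route each token to its single highest-scoring expert.

    Only one expert matrix is applied per token, so compute cost stays
    near a dense layer while total parameters scale with n_experts.
    """
    logits = X @ W_gate                        # (tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)  # softmax gate
    top = probs.argmax(axis=1)                 # top-1 expert per token
    out = np.empty_like(X)
    for e in range(n_experts):
        idx = np.where(top == e)[0]
        if idx.size:                           # gate value scales the output
            out[idx] = (X[idx] @ experts[e]) * probs[idx, e:e+1]
    return out

tokens = rng.normal(size=(5, d_model))
print(moe_layer(tokens).shape)                 # (5, 8)
```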
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena [4.312340306206884]
Interpretable machine learning offers a solution by analyzing models holistically to derive interpretations.
Current IML research is focused on auditing ML models rather than leveraging them for scientific inference.
We present a framework for designing IML methods, termed 'property descriptors', which illuminate not just the model but also the phenomenon it represents.
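As one plausible instance of a property descriptor, the sketch below computes a partial-dependence curve, which describes how the modeled phenomenon responds to a single feature; the descriptor choice and the toy model are assumptions for illustration.

```python
# Illustrative property descriptor: a partial-dependence curve.
# The model and data here are placeholders, not the paper's examples.
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Average model output while sweeping one feature over a grid.

    If the model has captured the real phenomenon, this curve is a
    statement about the phenomenon, not just about the model.
    """
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                  # intervene on one feature
        curve.append(predict(Xv).mean())    # marginalize over the rest
    return np.array(curve)

# Toy example: a known nonlinear response in feature 0.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 3))
predict = lambda Z: np.sin(Z[:, 0]) + 0.1 * Z[:, 1]

grid = np.linspace(-2, 2, 9)
print(np.round(partial_dependence(predict, X, 0, grid), 2))  # ~sin(grid)
```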
arXiv Detail & Related papers (2022-06-11T10:13:21Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose adopting post-hoc methods to tackle the interpretability issue for deep-learning-based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results demonstrate the feasibility of using the LRP method to interpret the DLKT model's predictions.
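For intuition about the mechanism, here is a minimal sketch of the LRP epsilon rule for one linear layer, the building block a post-hoc relevance analysis would apply through an unrolled RNN; shapes and names are illustrative.

```python
# Minimal sketch of the LRP epsilon rule for a single linear layer.
# Shapes and names are illustrative, not the paper's implementation.
import numpy as np

def lrp_epsilon(x, W, b, relevance_out, eps=1e-6):
    """Redistribute output relevance to inputs in proportion to each
    input's contribution z_ij = x_i * W_ij to the pre-activation."""
    z = x @ W + b                              # (out,) pre-activations
    denom = z + eps * np.sign(z)               # stabilized denominator
    s = relevance_out / denom                  # (out,)
    return x * (W @ s)                         # (in,) relevance per input

rng = np.random.default_rng(7)
x = rng.normal(size=6)                         # e.g. one step's features
W = rng.normal(size=(6, 4))
b = rng.normal(size=4)
R_out = rng.random(4)                          # relevance from layer above

R_in = lrp_epsilon(x, W, b, R_out)
# Relevance is approximately conserved across the layer (up to the bias
# and epsilon terms), which is what makes the attribution interpretable.
print(R_in.sum(), R_out.sum())
```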
arXiv Detail & Related papers (2020-05-13T04:03:21Z)