Scientifically-Interpretable Reasoning Network (ScIReN): Uncovering the Black-Box of Nature
- URL: http://arxiv.org/abs/2506.14054v1
- Date: Mon, 16 Jun 2025 23:21:37 GMT
- Title: Scientifically-Interpretable Reasoning Network (ScIReN): Uncovering the Black-Box of Nature
- Authors: Joshua Fan, Haodi Xu, Feng Tao, Md Nasim, Marc Grimson, Yiqi Luo, Carla P. Gomes
- Abstract summary: We propose a fully-transparent framework that combines interpretable neural and process-based reasoning. An interpretable encoder predicts scientifically-meaningful latent parameters, which are then passed through a differentiable process-based decoder. ScIReN outperforms black-box networks in predictive accuracy while providing substantial scientific interpretability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are a powerful tool for learning patterns from data. However, they do not respect known scientific laws, nor can they reveal novel scientific insights due to their black-box nature. In contrast, scientific reasoning distills biological or physical principles from observations and controlled experiments, and quantitatively interprets them with process-based models made of mathematical equations. Yet, process-based models rely on numerous free parameters that must be set in an ad-hoc manner, and thus often fit observations poorly in cross-scale predictions. While prior work has embedded process-based models in conventional neural networks, discovering interpretable relationships between parameters in process-based models and input features is still a grand challenge for scientific discovery. We thus propose Scientifically-Interpretable Reasoning Network (ScIReN), a fully-transparent framework that combines interpretable neural and process-based reasoning. An interpretable encoder predicts scientifically-meaningful latent parameters, which are then passed through a differentiable process-based decoder to predict labeled output variables. ScIReN also uses a novel hard-sigmoid constraint layer to restrict latent parameters to meaningful ranges defined by scientific prior knowledge, further enhancing its interpretability. While the embedded process-based model enforces established scientific knowledge, the encoder reveals new scientific mechanisms and relationships hidden in conventional black-box models. We apply ScIReN on two tasks: simulating the flow of organic carbon through soils, and modeling ecosystem respiration from plants. In both tasks, ScIReN outperforms black-box networks in predictive accuracy while providing substantial scientific interpretability -- it can infer latent scientific mechanisms and their relationships with input features.
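The hard-sigmoid constraint layer described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming the standard piecewise-linear hard sigmoid (clamp((x + 3) / 6, 0, 1)); the bound names `lo`/`hi` and the exact formulation in the paper may differ.

```python
def hard_sigmoid(x):
    # piecewise-linear squashing function: clamps (x + 3) / 6 into [0, 1]
    return min(max((x + 3.0) / 6.0, 0.0), 1.0)

def constrain(z, lo, hi):
    # map an unconstrained latent parameter z into the scientifically
    # meaningful range [lo, hi] defined by prior knowledge
    return lo + (hi - lo) * hard_sigmoid(z)
```

Because the mapping is linear in the interior and saturates at the bounds, gradients flow through unconstrained latents while predictions can never leave the scientifically valid range.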
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
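The underdamped Langevin dynamics this entry refers to can be sketched with a single Euler-Maruyama step: m dv = (-grad U(x) - gamma v) dt + sqrt(2 gamma T) dW. The function and parameter names below are illustrative, not taken from the paper, and the potential U would be learned in the actual model.

```python
import math
import random

def langevin_step(x, v, grad_U, dt=0.01, mass=1.0, gamma=0.5, temp=1.0):
    # one Euler-Maruyama step of underdamped Langevin dynamics:
    #   m dv = (-grad U(x) - gamma * v) dt + sqrt(2 * gamma * temp) dW
    noise = math.sqrt(2.0 * gamma * temp * dt) * random.gauss(0.0, 1.0)
    v_new = v + ((-grad_U(x) - gamma * v) * dt + noise) / mass
    x_new = x + v_new * dt
    return x_new, v_new
```

With temperature zero the noise vanishes and the damping term dissipates energy, which is the "inertia plus damping" prior mentioned in the summary.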
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Simulation as Supervision: Mechanistic Pretraining for Scientific Discovery [0.0]
We introduce Simulation-Grounded Neural Networks (SGNNs), a framework that uses mechanistic simulations as training data for neural networks. SGNNs achieve state-of-the-art results across scientific disciplines and modeling tasks. They enable back-to-simulation attribution, a new form of mechanistic interpretability.
arXiv Detail & Related papers (2025-07-11T19:18:42Z) - NOBLE -- Neural Operator with Biologically-informed Latent Embeddings to Capture Experimental Variability in Biological Neuron Models [68.89389652724378]
NOBLE is a neural operator framework that learns a mapping from a continuous frequency-modulated embedding of interpretable neuron features to the somatic voltage response induced by current injection. It predicts distributions of neural dynamics accounting for the intrinsic experimental variability. NOBLE is the first scaled-up deep learning framework validated on real experimental data.
arXiv Detail & Related papers (2025-06-05T01:01:18Z) - Exploring Biological Neuronal Correlations with Quantum Generative Models [0.0]
We introduce a quantum generative model framework for generating synthetic data that captures the spatial and temporal correlations of biological neuronal activity.
Our model demonstrates the ability to achieve reliable outcomes with fewer trainable parameters compared to classical methods.
arXiv Detail & Related papers (2024-09-13T18:00:06Z) - LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
arXiv Detail & Related papers (2024-05-16T03:04:10Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
arXiv Detail & Related papers (2023-07-04T05:19:19Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models as well as the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Convolutional Motif Kernel Networks [1.104960878651584]
We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
Our proposed method can be utilized on DNA and protein sequences.
arXiv Detail & Related papers (2021-11-03T15:06:09Z) - Theory-guided hard constraint projection (HCP): a knowledge-based
data-driven scientific machine learning method [7.778724782015986]
This study proposes theory-guided hard constraint projection (HCP).
This model converts physical constraints, such as governing equations, into a form that is easy to handle through discretization.
The performance of the theory-guided HCP is verified by experiments based on the heterogeneous subsurface flow problem.
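As a toy illustration of projecting a prediction onto a discretized physical constraint, consider a single linear constraint a . x = b (e.g. a discretized mass balance); the closed-form orthogonal projection is x - a (a . x - b) / (a . a). This is a generic sketch of the projection idea, not the paper's implementation.

```python
def project_onto_constraint(x, a, b):
    # orthogonally project prediction x onto the hyperplane a . x = b,
    # so the corrected prediction satisfies the constraint exactly
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    scale = (dot - b) / norm2
    return [xi - ai * scale for xi, ai in zip(x, a)]
```

Unlike a soft penalty in the loss, this hard projection guarantees the output obeys the constraint regardless of how the network was trained.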
arXiv Detail & Related papers (2020-12-11T06:17:43Z) - Identification of state functions by physically-guided neural networks with physically-meaningful internal layers [0.0]
We use the concept of physically-constrained neural networks (PCNN) to predict the input-output relation in a physical system.
We show that this approach, besides getting physically-based predictions, accelerates the training process.
arXiv Detail & Related papers (2020-11-17T11:26:37Z) - Phase Detection with Neural Networks: Interpreting the Black Box [58.720142291102135]
Neural networks (NNs) usually offer little insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of NN when trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
arXiv Detail & Related papers (2020-04-09T17:45:45Z) - Physics-Guided Machine Learning for Scientific Discovery: An Application in Simulating Lake Temperature Profiles [8.689056739160593]
This paper proposes a physics-guided recurrent neural network model (PGRNN) that combines RNNs and physics-based models.
We show that a PGRNN can improve prediction accuracy over that of physics-based models, while generating outputs consistent with physical laws.
Although we present and evaluate this methodology in the context of modeling the dynamics of temperature in lakes, it is applicable more widely to a range of scientific and engineering disciplines.
arXiv Detail & Related papers (2020-01-28T15:44:45Z)
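The physics-guided idea in the last entry can be sketched as a composite loss: a data-fit term plus a penalty for physically inconsistent output. In the lake setting a natural (here hypothetical) law is that water density should not decrease with depth; function and parameter names are illustrative, not from the paper.

```python
def physics_guided_loss(pred_density, obs_density, lam=1.0):
    # data-fit term: mean squared error against observations
    n = len(pred_density)
    mse = sum((p - o) ** 2 for p, o in zip(pred_density, obs_density)) / n
    # physics term: penalize density decreasing with depth
    # (an unphysical stratification profile)
    violation = sum(max(pred_density[i] - pred_density[i + 1], 0.0) ** 2
                    for i in range(n - 1))
    return mse + lam * violation
```

The weight `lam` trades off fidelity to data against consistency with the physical law, which is how such models can remain accurate while producing physically plausible outputs.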
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.