Discovering Mechanistic Models of Neural Activity: System Identification in an in Silico Zebrafish
- URL: http://arxiv.org/abs/2602.04492v1
- Date: Wed, 04 Feb 2026 12:33:29 GMT
- Title: Discovering Mechanistic Models of Neural Activity: System Identification in an in Silico Zebrafish
- Authors: Jan-Matthis Lueckmann, Viren Jain, Michał Januszewski
- Abstract summary: We test mechanistic models of neural circuits using simulations of a larval zebrafish as a ground truth. We find that LLM-based tree search autonomously discovers predictive models that significantly outperform established forecasting baselines. Our insights provide guidance for modeling real-world neural recordings and offer a broader template for AI-driven scientific discovery.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Constructing mechanistic models of neural circuits is a fundamental goal of neuroscience, yet verifying such models is limited by the lack of ground truth. To rigorously test model discovery, we establish an in silico testbed using neuromechanical simulations of a larval zebrafish as a transparent ground truth. We find that LLM-based tree search autonomously discovers predictive models that significantly outperform established forecasting baselines. Conditioning on sensory drive is necessary but not sufficient for faithful system identification, as models exploit statistical shortcuts. Structural priors prove essential for enabling robust out-of-distribution generalization and recovery of interpretable mechanistic models. Our insights provide guidance for modeling real-world neural recordings and offer a broader template for AI-driven scientific discovery.
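The paper does not include code in this listing; the following is a toy sketch of the tree-search idea only, with random parameter mutations standing in for LLM-proposed model edits and a driven scalar linear system standing in for the neuromechanical zebrafish simulation. All names and values are illustrative assumptions, not the paper's method.

```python
import math
import random

# Toy stand-in for the ground-truth simulator: a sensory-driven linear neuron,
# x[t+1] = 0.9*x[t] + 0.3*u[t]. (Illustrative only.)
def true_step(x, u):
    return 0.9 * x + 0.3 * u

random.seed(0)
us = [math.sin(0.1 * t) for t in range(200)]   # sensory drive
xs = [0.0]                                     # "recorded" activity trace
for u in us[:-1]:
    xs.append(true_step(xs[-1], u))

def mse(model):
    """One-step forecast error of a candidate model (a, b): x' = a*x + b*u."""
    a, b = model
    return sum((a * xs[t] + b * us[t] - xs[t + 1]) ** 2
               for t in range(len(xs) - 1)) / (len(xs) - 1)

def tree_search(root=(0.0, 0.0), rounds=100, children=8, step=0.3):
    """Greedy tree search: repeatedly expand the best-scoring node with
    perturbed children. (Random perturbations stand in for the paper's
    LLM-proposed model edits.)"""
    frontier = [root]
    for _ in range(rounds):
        best = min(frontier, key=mse)
        kids = [(best[0] + random.uniform(-step, step),
                 best[1] + random.uniform(-step, step))
                for _ in range(children)]
        frontier = [best] + kids   # keep the incumbent, so scores never worsen
        step *= 0.95               # anneal the proposal radius
    return min(frontier, key=mse)

a_hat, b_hat = tree_search()
```

On this toy problem the search drives the forecast error far below that of the trivial root model. Note that the paper's central caveat does not appear here: a low in-distribution forecast error alone cannot distinguish a faithful mechanistic model from a statistical shortcut, which is why the abstract stresses out-of-distribution generalization and structural priors.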
Related papers
- NIMMGen: Learning Neural-Integrated Mechanistic Digital Twins with LLMs [17.66806675891691]
We introduce the Neural-Integrated Mechanistic Modeling (NIMM) framework for evaluating mechanistic models. Our evaluation reveals fundamental challenges in current baselines, ranging from model effectiveness to code-level correctness. We design NIMMGen, an agentic framework for neural-integrated mechanistic modeling that improves code correctness and practical validity through iterative refinement.
arXiv Detail & Related papers (2026-02-20T05:46:54Z)
- Simulation as Supervision: Mechanistic Pretraining for Scientific Discovery [0.0]
We introduce Simulation-Grounded Neural Networks (SGNNs), a framework that uses mechanistic simulations as training data for neural networks. SGNNs achieve state-of-the-art results across scientific disciplines and modeling tasks. They enable back-to-simulation attribution, a new form of mechanistic interpretability.
arXiv Detail & Related papers (2025-07-11T19:18:42Z)
- Exploring hyperelastic material model discovery for human brain cortex: multivariate analysis vs. artificial neural network approaches [10.003764827561238]
This study aims to identify the most favorable material model for human brain tissue.
We apply artificial neural network and multiple regression methods to a generalization of widely accepted classic models.
arXiv Detail & Related papers (2023-10-16T18:49:59Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neglecting neural network calibration undermines human trust in model predictions.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Monotonic Neural Additive Models: Pursuing Regulated Machine Learning Models for Credit Scoring [1.90365714903665]
We introduce a novel class of monotonic neural additive models, which meet regulatory requirements by simplifying neural network architecture and enforcing monotonicity.
Our new model is as accurate as black-box fully-connected neural networks, providing a highly accurate and regulated machine learning method.
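A hedged toy of the monotonicity idea described above (not the paper's architecture or training procedure): each feature gets its own small shape function, made non-decreasing by construction via non-negative weights and a monotone activation. All class names and hyperparameters here are invented for illustration.

```python
import math
import random

def softplus(z):
    # Numerically stable log(1 + e^z); softplus is strictly increasing.
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))

class MonotoneShape:
    """Per-feature shape function f(x), non-decreasing by construction:
    a sum of softplus ramps whose slopes are forced non-negative."""
    def __init__(self, n_units=4, seed=0):
        rng = random.Random(seed)
        self.w = [abs(rng.gauss(0.0, 1.0)) for _ in range(n_units)]  # >= 0
        self.b = [rng.gauss(0.0, 1.0) for _ in range(n_units)]

    def __call__(self, x):
        return sum(w * softplus(x + b) for w, b in zip(self.w, self.b))

class MonotonicAdditiveModel:
    """Score = bias + sum of per-feature monotone shape functions, so
    raising any single feature value can never lower the score."""
    def __init__(self, n_features, bias=0.0):
        self.shapes = [MonotoneShape(seed=i) for i in range(n_features)]
        self.bias = bias

    def __call__(self, features):
        return self.bias + sum(f(x) for f, x in zip(self.shapes, features))

model = MonotonicAdditiveModel(n_features=2)
```

Because monotonicity holds by construction rather than by penalty, the regulatory property survives any training procedure applied to the (non-negative) weights, which is the kind of guarantee credit-scoring regulation asks for.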
arXiv Detail & Related papers (2022-09-21T02:14:09Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting. We investigate how to leverage both the theoretical flexibility of mechanistic models and the data-driven expressivity of AI models.
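A minimal, hedged sketch of the physics-informed idea (not the paper's architecture): score a candidate infection trajectory with a data-fit term plus a mechanistic penalty, here the residual of one Euler step of discretized SIR dynamics. Function names, the weighting `lam`, and all parameter values are illustrative assumptions.

```python
def sir_residual(S, I, beta, gamma):
    """Mean squared mismatch between a trajectory and one Euler step of SIR
    dynamics: S' = -beta*S*I, I' = beta*S*I - gamma*I (normalized population)."""
    res = 0.0
    for t in range(len(I) - 1):
        dS = -beta * S[t] * I[t]
        dI = beta * S[t] * I[t] - gamma * I[t]
        res += (S[t + 1] - (S[t] + dS)) ** 2 + (I[t + 1] - (I[t] + dI)) ** 2
    return res / (len(I) - 1)

def einn_loss(S, I, I_obs, beta, gamma, lam=10.0):
    """Data fit on observed infections plus a weighted mechanistic penalty."""
    data = sum((i - o) ** 2 for i, o in zip(I, I_obs)) / len(I)
    return data + lam * sir_residual(S, I, beta, gamma)

# A trajectory built to follow SIR exactly incurs no mechanistic penalty:
beta, gamma = 0.5, 0.2
S, I = [0.99], [0.01]
for _ in range(20):
    dS = -beta * S[-1] * I[-1]
    dI = beta * S[-1] * I[-1] - gamma * I[-1]
    S.append(S[-1] + dS)
    I.append(I[-1] + dI)

consistent = einn_loss(S, I, I_obs=I, beta=beta, gamma=gamma)  # == 0.0
```

In a full EINN the trajectory would be a neural network's output and both terms would be minimized jointly, letting data correct the mechanistic model where it is misspecified while the mechanistic term regularizes the network where data is scarce.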
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
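A minimal magnitude-pruning sketch of the idea (illustrative only; Sparse Flows prunes neural ODE parameters as part of training, which this toy does not reproduce): zero out the smallest-magnitude weights, keeping a target fraction.

```python
def prune_by_magnitude(weights, keep_fraction):
    """Return a copy of `weights` with all but the largest-magnitude
    `keep_fraction` of entries set to zero."""
    ranked = sorted((abs(w) for w in weights), reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    threshold = ranked[k - 1]  # magnitude of the k-th largest weight
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.01, -0.8, 0.05, 1.2, -0.02, 0.3, -0.07, 0.9]
pruned = prune_by_magnitude(w, keep_fraction=0.25)  # keep the top 2 of 8
```

In practice such a mask is applied iteratively during training with fine-tuning between pruning steps, which is what allows high sparsity levels without losing accuracy.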
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Theory-guided hard constraint projection (HCP): a knowledge-based data-driven scientific machine learning method [7.778724782015986]
This study proposes theory-guided hard constraint projection (HCP), which converts physical constraints, such as governing equations, into an easy-to-handle form through discretization.
The performance of the theory-guided HCP is verified by experiments based on the heterogeneous subsurface flow problem.
arXiv Detail & Related papers (2020-12-11T06:17:43Z)
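A hedged sketch of a hard-constraint projection step (illustrative, not the paper's formulation): project raw network predictions onto the set that exactly satisfies a discretized linear constraint, here conservation of a total quantity across grid cells. The function name and values are invented for illustration.

```python
def project_to_conservation(preds, total):
    """Euclidean projection onto {y : sum(y) == total}. For this constraint
    the projection has a closed form: shift every entry by the same amount."""
    correction = (total - sum(preds)) / len(preds)
    return [p + correction for p in preds]

raw = [1.0, 2.5, 0.5]                      # unconstrained network output
proj = project_to_conservation(raw, 6.0)   # now sums to 6.0 exactly
```

The appeal of a projection layer over a soft penalty is that the constraint holds exactly on every output, not just approximately on average over the training data.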
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.