NIMMGen: Learning Neural-Integrated Mechanistic Digital Twins with LLMs
- URL: http://arxiv.org/abs/2602.18008v1
- Date: Fri, 20 Feb 2026 05:46:54 GMT
- Title: NIMMGen: Learning Neural-Integrated Mechanistic Digital Twins with LLMs
- Authors: Zihan Guan, Rituparna Datta, Mengxuan Hu, Shunshun Liu, Aiying Zhang, Prasanna Balachandran, Sheng Li, Anil Vullikanti
- Abstract summary: We introduce the Neural-Integrated Mechanistic Modeling (NIMM) evaluation framework for assessing LLM-generated mechanistic models. Our evaluation reveals fundamental challenges in current baselines, ranging from model effectiveness to code-level correctness. We design NIMMgen, an agentic framework for neural-integrated mechanistic modeling that enhances code correctness and practical validity through iterative refinement.
- Score: 17.66806675891691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mechanistic models encode scientific knowledge about dynamical systems and are widely used in downstream scientific and policy applications. Recent work has explored LLM-based agentic frameworks to automatically construct mechanistic models from data; however, existing problem settings substantially oversimplify real-world conditions, leaving it unclear whether LLM-generated mechanistic models are reliable in practice. To address this gap, we introduce the Neural-Integrated Mechanistic Modeling (NIMM) evaluation framework, which evaluates LLM-generated mechanistic models under realistic settings with partial observations and diversified task objectives. Our evaluation reveals fundamental challenges in current baselines, ranging from model effectiveness to code-level correctness. Motivated by these findings, we design NIMMgen, an agentic framework for neural-integrated mechanistic modeling that enhances code correctness and practical validity through iterative refinement. Experiments across three datasets from diversified scientific domains demonstrate its strong performance. We also show that the learned mechanistic models support counterfactual intervention simulation.
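The abstract's core idea, a known mechanistic ODE augmented with a learned neural term and fit against partial observations, can be sketched in a toy form. This is not the paper's actual benchmark or architecture: the SIR equations, the random-feature correction network, and the every-10th-step observation of a single compartment are all illustrative assumptions, and in practice the neural term would be trained to minimize the partial-observation loss shown at the end.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_rhs(state, beta=0.3, gamma=0.1):
    # Known mechanistic core: classic SIR dynamics.
    S, I, R = state
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

# Tiny random-feature network standing in for the learned neural term.
W1 = rng.normal(scale=0.5, size=(3, 16))
W2 = rng.normal(scale=0.01, size=(16, 3))

def neural_term(state):
    return np.tanh(state @ W1) @ W2

def simulate(rhs, state0, steps=100, dt=0.1):
    # Forward-Euler integration of the given right-hand side.
    traj = [np.asarray(state0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * rhs(traj[-1]))
    return np.array(traj)

state0 = [0.99, 0.01, 0.0]
# Stand-in for real data: the purely mechanistic trajectory.
truth = simulate(sir_rhs, state0)
# Hybrid (neural-integrated) model: mechanistic dynamics plus correction.
hybrid = simulate(lambda s: sir_rhs(s) + neural_term(s), state0)

# Partial observation: only the infected fraction I(t), every 10th step.
idx = np.arange(0, len(truth), 10)
loss = np.mean((hybrid[idx, 1] - truth[idx, 1]) ** 2)
```

The loss touches only the observed compartment at the observed times, which is what makes the setting "partially observed": the unobserved S and R trajectories are constrained only indirectly, through the mechanistic structure.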
Related papers
- Discovering Mechanistic Models of Neural Activity: System Identification in an in Silico Zebrafish [0.8219355086755371]
We test mechanistic models of neural circuits using simulations of a larval zebrafish as a ground truth. We find that LLM-based tree search autonomously discovers predictive models that significantly outperform established forecasting baselines. Our insights provide guidance for modeling real-world neural recordings and offer a broader template for AI-driven scientific discovery.
arXiv Detail & Related papers (2026-02-04T12:33:29Z) - Automating modeling in mechanics: LLMs as designers of physics-constrained neural networks for constitutive modeling of materials [0.0]
Large language model (LLM)-based agentic frameworks increasingly adopt the paradigm of dynamically generating task-specific agents. We suggest that not only agents but also specialized software modules for scientific and engineering tasks can be generated on demand.
arXiv Detail & Related papers (2025-12-01T14:42:22Z) - Simulation as Supervision: Mechanistic Pretraining for Scientific Discovery [0.0]
We introduce Simulation-Grounded Neural Networks (SGNNs), a framework that uses mechanistic simulations as training data for neural networks. SGNNs achieve state-of-the-art results across scientific disciplines and modeling tasks. They enable back-to-simulation attribution, a new form of mechanistic interpretability.
arXiv Detail & Related papers (2025-07-11T19:18:42Z) - Model Reprogramming Demystified: A Neural Tangent Kernel Perspective [49.42322600160337]
We present a comprehensive theoretical analysis of Model Reprogramming (MR) through the lens of the Neural Tangent Kernel (NTK) framework. We demonstrate that the success of MR is governed by the eigenvalue spectrum of the NTK matrix on the target dataset. Our contributions include a novel theoretical framework for MR, insights into the relationship between source and target models, and extensive experiments validating our findings.
arXiv Detail & Related papers (2025-05-31T16:15:04Z) - Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to Leverage Ontologies, and How to Do Without Them [9.952432291248954]
We investigate the use of LLM-generated data for continual pretraining of encoder models in domains with limited data. We compile a benchmark specifically designed for assessing embedding model performance in invasion biology. Our results demonstrate that this approach achieves a fully automated pipeline for enhancing domain-specific understanding of small encoder models.
arXiv Detail & Related papers (2025-03-27T21:51:24Z) - MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science [62.96434290874878]
Current Multi-Modal Large Language Models (MLLMs) have shown strong capabilities in general visual reasoning tasks. We develop a new framework, Multi-Modal Scientific Reasoning with Physics Perception and Simulation (MAPS), based on an MLLM. MAPS decomposes an expert-level multi-modal reasoning task into physical diagram understanding via a Physical Perception Model (PPM) and reasoning with physical knowledge via a simulator.
arXiv Detail & Related papers (2025-01-18T13:54:00Z) - Meta-Learning for Physically-Constrained Neural System Identification [9.417562391585076]
We present a gradient-based meta-learning framework for rapid adaptation of neural state-space models (NSSMs) for black-box system identification. We show that the meta-learned models result in improved downstream performance in model-based state estimation in indoor localization and energy systems.
arXiv Detail & Related papers (2025-01-10T18:46:28Z) - GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects [55.02281855589641]
GausSim is a novel neural network-based simulator designed to capture the dynamic behaviors of real-world elastic objects represented through Gaussian kernels. We leverage continuum mechanics and treat each kernel as a Center of Mass System (CMS) that represents a continuous piece of matter. In addition, GausSim incorporates explicit physics constraints, such as mass and momentum conservation, ensuring interpretable results and robust, physically plausible simulations.
arXiv Detail & Related papers (2024-12-23T18:58:17Z) - Data-Efficient Inference of Neural Fluid Fields via SciML Foundation Model [49.06911227670408]
We show that a SciML foundation model can significantly improve the data efficiency of inferring real-world 3D fluid dynamics with improved generalization. We equip neural fluid fields with a novel collaborative training approach that utilizes augmented views and fluid features extracted by our foundation model.
arXiv Detail & Related papers (2024-12-18T14:39:43Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
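The EINN-style combination of a data-driven term with a mechanistic term can be sketched as a composite loss. This numpy version, with finite-difference derivatives and an SIR residual, is a simplified stand-in for the paper's actual formulation; the trajectory, parameters, and 0.05 perturbation are illustrative assumptions.

```python
import numpy as np

def einn_style_loss(S_pred, I_pred, I_obs, beta=0.3, gamma=0.1, dt=0.1):
    # Data term: match the observed infection curve.
    data = np.mean((I_pred - I_obs) ** 2)
    # Mechanistic term: penalize violations of dI/dt = beta*S*I - gamma*I,
    # with the time derivative approximated by finite differences.
    residual = np.gradient(I_pred, dt) - (beta * S_pred * I_pred - gamma * I_pred)
    return data + np.mean(residual ** 2)

# Toy trajectory generated from the SIR equations themselves.
S, I = [0.99], [0.01]
for _ in range(99):
    dS = -0.3 * S[-1] * I[-1]
    dI = 0.3 * S[-1] * I[-1] - 0.1 * I[-1]
    S.append(S[-1] + 0.1 * dS)
    I.append(I[-1] + 0.1 * dI)
S, I = np.array(S), np.array(I)

consistent = einn_style_loss(S, I, I)        # obeys the mechanism
violating = einn_style_loss(S, I + 0.05, I)  # constant offset breaks it
```

A prediction that fits the data but violates the mechanistic residual (or vice versa) is penalized, which is how the composite loss lets the mechanistic model constrain the neural network's flexibility.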
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.