Unveiling the Actual Performance of Neural-based Models for Equation Discovery on Graph Dynamical Systems
- URL: http://arxiv.org/abs/2508.18173v1
- Date: Mon, 25 Aug 2025 16:25:50 GMT
- Title: Unveiling the Actual Performance of Neural-based Models for Equation Discovery on Graph Dynamical Systems
- Authors: Riccardo Cappi, Paolo Frazzetto, Nicolò Navarin, Alessandro Sperduti
- Abstract summary: Kolmogorov-Arnold Networks (KANs) for graphs are designed to exploit their inherent interpretability. KANs successfully identify the underlying symbolic equations, significantly surpassing existing baselines. This study offers a practical guide for researchers, clarifying the trade-offs between model expressivity and interpretability.
- Score: 45.11208589443806
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The ``black-box'' nature of deep learning models presents a significant barrier to their adoption for scientific discovery, where interpretability is paramount. This challenge is especially pronounced in discovering the governing equations of dynamical processes on networks or graphs, since the topological structure itself shapes the processes' behavior. This paper provides a rigorous, comparative assessment of state-of-the-art symbolic regression techniques for this task. We evaluate established methods, including sparse regression and MLP-based architectures, and introduce a novel adaptation of Kolmogorov-Arnold Networks (KANs) for graphs, designed to exploit their inherent interpretability. Across a suite of synthetic and real-world dynamical systems, our results demonstrate that both MLP and KAN-based architectures can successfully identify the underlying symbolic equations, significantly surpassing existing baselines. Critically, we show that KANs achieve this performance with greater parsimony and transparency, as their learnable activation functions provide a clearer mapping to the true physical dynamics. This study offers a practical guide for researchers, clarifying the trade-offs between model expressivity and interpretability, and establishes the viability of neural-based architectures for robust scientific discovery on complex systems.
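The interpretability claim for KANs rests on a simple mechanism that can be illustrated with a minimal sketch (this is not the paper's actual architecture): each edge carries its own learnable one-dimensional function, here a linear combination of a small fixed basis, so the fitted coefficients can be read off directly as a symbolic expression. The basis choice and the `fit_edge` helper are illustrative assumptions.

```python
import numpy as np

def basis(x):
    """Fixed candidate basis, applied element-wise: [1, x, x^2, sin(x)]."""
    return np.stack([np.ones_like(x), x, x**2, np.sin(x)], axis=-1)

def fit_edge(x, y):
    """Fit one 'edge function' as a linear combination of the basis."""
    coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
    return coef

# Recover f(x) = 0.5 x^2 + sin(x) from samples; the learned coefficients
# translate directly into a symbolic expression for the edge.
x = np.linspace(-2.0, 2.0, 200)
y = 0.5 * x**2 + np.sin(x)
c = fit_edge(x, y)
print(np.round(c, 3))  # coefficients ≈ [0, 0, 0.5, 1]
```

In a full KAN the edge functions are splines trained jointly by gradient descent, but the read-off step is the same: inspect each learned one-dimensional function and match it to a symbolic form.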
Related papers
- Combining feature-based approaches with graph neural networks and symbolic regression for synergistic performance and interpretability [0.0]
MatterVial is an innovative hybrid framework for feature-based machine learning in materials science. Our approach combines the chemical transparency of traditional feature-based models with the predictive power of deep learning architectures. An integrated interpretability module, employing surrogate models and symbolic regression, decodes the latent GNN-derived descriptors into explicit, physically meaningful formulas.
arXiv Detail & Related papers (2025-09-02T16:45:02Z) - Scientific Machine Learning with Kolmogorov-Arnold Networks [0.0]
The field of scientific machine learning is increasingly adopting Kolmogorov-Arnold Networks (KANs) for data encoding. KANs address issues with enhanced interpretability and flexibility, enabling more efficient modeling of complex nonlinear interactions. This review categorizes recent progress in KAN-based models across three perspectives: (i) data-driven learning, (ii) physics-informed modeling, and (iii) deep-operator learning.
arXiv Detail & Related papers (2025-07-30T01:26:44Z) - Structured Kolmogorov-Arnold Neural ODEs for Interpretable Learning and Symbolic Discovery of Nonlinear Dynamics [3.9000699798128338]
We propose a novel framework that integrates structured state-space modeling with the Kolmogorov-Arnold Network (KAN). SKANODE first employs a fully trainable KAN as a universal function approximator within a structured Neural ODE framework to perform virtual sensing. We exploit the symbolic regression capability of KAN to extract compact and interpretable expressions for the system's governing dynamics.
arXiv Detail & Related papers (2025-06-23T06:42:43Z) - Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - SINDyG: Sparse Identification of Nonlinear Dynamical Systems from Graph-Structured Data, with Applications to Stuart-Landau Oscillator Networks [0.27624021966289597]
We develop a new method called Sparse Identification of Dynamical Systems from Graph-structured data (SINDyG). SINDyG incorporates the network structure into sparse regression to identify model parameters that explain the underlying network dynamics. Our experiments validate the improved accuracy and simplicity of discovered network dynamics.
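The mechanism summarized above can be sketched with sequentially thresholded least squares (the standard SINDy solver), extended with a graph-coupling column in the candidate library. The toy ring graph, the assumed dynamics, and the three-term library below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def stlsq(Theta, dX, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: fit, zero small terms, refit."""
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)
    return Xi

# Ring of 4 nodes; assumed true dynamics: dx_i/dt = -0.5 x_i + sum_j A_ij (x_j - x_i)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
deg = A.sum(axis=1)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # sampled node states
coupling = X @ A.T - X * deg           # graph-diffusion term per node
dX = -0.5 * X + 1.0 * coupling         # noiseless derivatives

# Candidate library shared across nodes: columns [x, x^2, coupling].
Theta = np.column_stack([X.ravel(), (X**2).ravel(), coupling.ravel()])
Xi = stlsq(Theta, dX.reshape(-1, 1))
print(np.round(Xi.ravel(), 3))  # sparse coefficients ≈ [-0.5, 0, 1]
```

The key SINDyG-style move is the `coupling` column: adding a network-aware term to the library lets sparse regression attribute part of each node's derivative to its neighbors instead of forcing a purely local fit.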
arXiv Detail & Related papers (2024-09-02T17:51:37Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.