Generalizing the SINDy approach with nested neural networks
- URL: http://arxiv.org/abs/2404.15742v2
- Date: Sun, 26 Jan 2025 18:47:14 GMT
- Title: Generalizing the SINDy approach with nested neural networks
- Authors: Camilla Fiorini, Clément Flint, Louis Fostier, Emmanuel Franck, Reyhaneh Hashemi, Victor Michel-Dansac, Wassim Tenachi
- Abstract summary: Nested SINDy builds on the SINDy framework by introducing additional layers before and after the core SINDy layer. We demonstrate the ability of the Nested SINDy approach to accurately find symbolic expressions for simple systems, and sparse (false but accurate) analytical representations for more complex systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Symbolic Regression (SR) is a widely studied field of research that aims to infer symbolic expressions from data. A popular approach to SR is the Sparse Identification of Nonlinear Dynamical Systems (SINDy) framework, which uses sparse regression to identify governing equations from data. This study introduces an enhanced method, Nested SINDy, that aims to increase the expressivity of the SINDy approach through a nested structure. Indeed, traditional symbolic regression and system identification methods often fail on complex systems that cannot be easily described analytically. Nested SINDy builds on the SINDy framework by introducing additional layers before and after the core SINDy layer. This allows the method to identify symbolic representations for a wider range of systems, including those involving compositions and products of functions. We demonstrate the ability of the Nested SINDy approach to accurately find symbolic expressions for simple systems, such as basic trigonometric functions, and sparse (false but accurate) analytical representations for more complex systems. Our results highlight Nested SINDy's potential as a tool for symbolic regression, surpassing the traditional SINDy approach in terms of expressivity. However, we also note the challenges in the optimization process for Nested SINDy and suggest future research directions, including the design of a more robust optimization methodology. This study shows that Nested SINDy can effectively discover symbolic representations of dynamical systems from data, offering new opportunities for understanding complex systems through data-driven methods.
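The core SINDy step the abstract refers to can be illustrated with a minimal sequentially thresholded least-squares (STLSQ) sketch. The candidate library, threshold value, and toy system below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression
    step at the heart of SINDy."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold  # coefficients to prune
        xi[small] = 0.0
        big = ~small
        if big.any():
            # refit only the surviving (active) library terms
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy example: recover dx/dt = -2*x + x**3 from noiseless samples.
x = np.linspace(-2.0, 2.0, 200)
dxdt = -2.0 * x + x**3
# Candidate library [1, x, x^2, x^3] -- an illustrative choice.
theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
xi = stlsq(theta, dxdt)  # expected: approx [0, -2, 0, 1]
```

Nested SINDy, as described above, would wrap trainable layers around this sparse-regression core so that compositions and products of library terms also become identifiable.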
Related papers
- Sparse Interpretable Deep Learning with LIES Networks for Symbolic Regression [22.345828337550575]
Symbolic regression aims to discover closed-form mathematical expressions that accurately describe data. Existing SR methods often rely on population-based search or autoregressive modeling. We introduce LIES (Logarithm, Identity, Exponential, Sine), a fixed neural network architecture with interpretable primitive activations that are optimized to model symbolic expressions.
arXiv Detail & Related papers (2025-06-09T22:05:53Z)
- Deep Symbolic Optimization: Reinforcement Learning for Symbolic Mathematics [43.622135148720886]
Deep Symbolic Optimization (DSO) is a novel computational framework that enables symbolic optimization for scientific discovery. One notable example is equation discovery, which aims to automatically derive mathematical models expressed in symbolic form. In this chapter, we provide a comprehensive overview of the DSO framework and illustrate its transformative potential for automating symbolic optimization in scientific discovery.
arXiv Detail & Related papers (2025-05-16T00:31:19Z)
- Symbolic Neural Ordinary Differential Equations [11.69943926220929]
We propose a novel learning framework for symbolic continuous-depth neural networks, termed Symbolic Neural Ordinary Differential Equations (SNODEs).
Our framework can be further applied to a wide range of scientific problems, such as system bifurcation and control, reconstruction and forecasting, as well as the discovery of new equations.
arXiv Detail & Related papers (2025-03-11T05:38:22Z)
- SyMANTIC: An Efficient Symbolic Regression Method for Interpretable and Parsimonious Model Discovery in Science and Beyond [3.4191590966148824]
We introduce SyMANTIC, a novel Symbolic Regression (SR) algorithm.
SyMANTIC efficiently identifies low-dimensional descriptors from a large set of candidates.
We show that SyMANTIC uncovers similar or more accurate models at a fraction of the cost of existing SR methods.
arXiv Detail & Related papers (2025-02-05T17:05:25Z)
- Neuro-Symbolic Query Optimization in Knowledge Graphs [0.4915744683251151]
This chapter delves into the emerging field of neuro-symbolic query optimization for knowledge graphs.
Recent advancements have introduced neural models, which capture non-linear aspects of query optimization.
We discuss the architecture of these hybrid systems, highlighting the interplay between neural and symbolic components.
arXiv Detail & Related papers (2024-11-21T16:31:27Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
New architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- EASRec: Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems [82.76483989905961]
Current Sequential Recommender Systems (SRSs) suffer from computational and resource inefficiencies.
We develop the Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems (EASRec)
EASRec introduces data-aware gates that leverage historical information from input data batch to improve the performance of the recommendation network.
arXiv Detail & Related papers (2024-02-01T07:22:52Z)
- GFN-SR: Symbolic Regression with Generative Flow Networks [0.9208007322096533]
In recent years, deep symbolic regression (DSR) has emerged as a popular method in the field.
We propose an alternative framework (GFN-SR) to approach SR with deep learning.
GFN-SR is capable of generating a diverse set of best-fitting expressions.
arXiv Detail & Related papers (2023-12-01T07:38:05Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences, that reflect the semanticity and compositionality characteristic of symbolic systems through unsupervised learning, rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Robust SINDy Approach by Combining Neural Networks and an Integral Form [8.950469063443332]
We propose a robust method to discover governing equations from noisy and scarce data.
We use neural networks to learn an implicit representation based on measurement data.
We obtain the derivative information required for SINDy using an automatic differentiation tool.
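The autodiff step this entry describes can be sketched as follows. This is a hedged illustration, not the paper's actual pipeline: in place of a trained neural network fitted to measurements, a closed-form surrogate stands in so the example is self-contained, and JAX is an assumed choice of automatic differentiation tool.

```python
import jax
import jax.numpy as jnp

# Stand-in for a fitted implicit representation of the trajectory.
# In the paper's setting this would be a trained neural network;
# here a closed-form surrogate keeps the sketch self-contained.
def surrogate(t):
    return jnp.sin(t)

# Automatic differentiation yields the derivative signal SINDy needs,
# avoiding noisy finite differences on the raw measurements.
d_surrogate = jax.vmap(jax.grad(surrogate))

t = jnp.linspace(0.0, 2.0 * jnp.pi, 100)
dxdt = d_surrogate(t)  # should closely match cos(t)
max_err = float(jnp.max(jnp.abs(dxdt - jnp.cos(t))))
```

The derivative array `dxdt` would then feed into the sparse-regression step as the left-hand side of the identified dynamics.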
arXiv Detail & Related papers (2023-09-13T10:50:04Z)
- InVAErt networks: a data-driven framework for model synthesis and identifiability analysis [0.0]
inVAErt is a framework for data-driven analysis and synthesis of physical systems.
It uses a deterministic decoder to represent the forward and inverse maps, a normalizing flow to capture the probabilistic distribution of system outputs, and a variational encoder to learn a compact latent representation for the lack of bijectivity between inputs and outputs.
arXiv Detail & Related papers (2023-07-24T07:58:18Z)
- Benchmarking sparse system identification with low-dimensional chaos [1.5849413067450229]
We systematically benchmark sparse regression variants by utilizing the dysts standardized database of chaotic systems.
We demonstrate how this open-source tool can be used to quantitatively compare different methods of system identification.
In all cases, we used ensembling to improve the noise robustness of SINDy and provide statistical comparisons.
arXiv Detail & Related papers (2023-02-04T18:49:52Z)
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.