GINN-LP: A Growing Interpretable Neural Network for Discovering
Multivariate Laurent Polynomial Equations
- URL: http://arxiv.org/abs/2312.10913v2
- Date: Thu, 15 Feb 2024 03:56:45 GMT
- Title: GINN-LP: A Growing Interpretable Neural Network for Discovering
Multivariate Laurent Polynomial Equations
- Authors: Nisal Ranasinghe, Damith Senanayake, Sachith Seneviratne, Malin
Premaratne, Saman Halgamuge
- Abstract summary: We propose GINN-LP, an interpretable neural network, to discover the form of a Laurent Polynomial equation.
To the best of our knowledge, this is the first neural network that can discover arbitrary multivariate Laurent polynomial terms without any prior information on the order.
We show that GINN-LP outperforms state-of-the-art symbolic regression methods on datasets generated from real-world Laurent polynomial equations.
- Score: 1.1142444517901018
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traditional machine learning is generally treated as a black-box optimization
problem and does not typically produce interpretable functions that connect
inputs and outputs. However, the ability to discover such interpretable
functions is desirable. In this work, we propose GINN-LP, an interpretable
neural network to discover the form and coefficients of the underlying equation
of a dataset, when the equation is assumed to take the form of a multivariate
Laurent Polynomial. This is facilitated by a new type of interpretable neural
network block, named the "power-term approximator block", consisting of
logarithmic and exponential activation functions. GINN-LP is end-to-end
differentiable, making it possible to use backpropagation for training. We
propose a neural network growth strategy that enables finding a suitable
number of terms in the Laurent polynomial that represents the data, along with
sparsity regularization to promote the discovery of concise equations. To the
best of our knowledge, this is the first model that can discover arbitrary
multivariate Laurent polynomial terms without any prior information on the
order. Our approach is first evaluated on a subset of the data used in SRBench,
a benchmark for symbolic regression. We show that GINN-LP outperforms
state-of-the-art symbolic regression methods on datasets generated using 48
real-world equations in the form of multivariate Laurent polynomials. Next, we
propose an ensemble method that combines our method with a high-performing
symbolic regression method, enabling us to discover non-Laurent polynomial
equations. We achieve state-of-the-art results in equation discovery, showing
an absolute improvement of 7.1% over the best contender, by applying this
ensemble method to 113 datasets within SRBench with known ground-truth
equations.
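A multivariate Laurent polynomial is a sum of power terms in which each variable may carry a positive or negative integer exponent, for example $f(x_1, x_2) = 3x_1^2 x_2^{-1} + 5x_1^{-3}$. The sketch below, a minimal PyTorch rendering of the ideas in the abstract rather than the authors' implementation, shows how a power-term approximator block built from logarithmic and exponential activations realizes one such term: $\exp\left(\sum_i w_i \log x_i\right) = \prod_i x_i^{w_i}$, with learnable and possibly negative exponents $w_i$. The class names, the fixed number of terms, and all hyperparameters are illustrative assumptions, and the paper's network-growth strategy is replaced here by a fixed number of blocks.

```python
import torch
import torch.nn as nn

class PowerTermBlock(nn.Module):
    """One power-term approximator: log -> linear -> exp, so the block
    computes exp(w . log(x)) = x1^w1 * ... * xn^wn (a single Laurent term).
    Assumes strictly positive inputs so the logarithm is defined."""
    def __init__(self, n_inputs: int):
        super().__init__()
        self.linear = nn.Linear(n_inputs, 1, bias=False)  # weights = exponents

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.exp(self.linear(torch.log(x)))

class GINNLPSketch(nn.Module):
    """A fixed-size stand-in for the grown network: a sparse linear
    combination of power-term blocks. The paper grows blocks incrementally;
    here the number of terms is fixed for simplicity."""
    def __init__(self, n_inputs: int, n_terms: int):
        super().__init__()
        self.terms = nn.ModuleList(PowerTermBlock(n_inputs) for _ in range(n_terms))
        self.coeffs = nn.Linear(n_terms, 1, bias=False)  # term coefficients

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.cat([block(x) for block in self.terms], dim=1)
        return self.coeffs(t)

# End-to-end differentiable, so plain backpropagation works. The L1 penalty
# on the term coefficients stands in for the sparsity regularization that
# the abstract says promotes concise equations.
model = GINNLPSketch(n_inputs=2, n_terms=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.rand(256, 2) + 0.5             # strictly positive inputs
y = 2.0 * x[:, :1] ** 2 / x[:, 1:]       # target: 2 * x1^2 * x2^-1
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) \
        + 1e-3 * model.coeffs.weight.abs().sum()
    loss.backward()
    opt.step()
```

After training, each block's learned exponents can be read off `block.linear.weight` and the term coefficients off `model.coeffs.weight`, which is what makes the recovered equation directly interpretable.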
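The ensemble step can be pictured as a simple model-selection wrapper around two equation discoverers. A hypothetical sketch follows; `fit_ginn_lp` and `fit_other_sr` are placeholder callables, and selection by validation $R^2$ is an assumption, since the abstract does not name the criterion:

```python
from sklearn.metrics import r2_score

def ensemble_discover(x_train, y_train, x_val, y_val, fit_ginn_lp, fit_other_sr):
    """Fit both symbolic-regression methods and keep whichever candidate
    equation predicts held-out data better."""
    candidates = [fit(x_train, y_train) for fit in (fit_ginn_lp, fit_other_sr)]
    # Each fitted candidate is assumed to expose a .predict() method.
    return max(candidates, key=lambda m: r2_score(y_val, m.predict(x_val)))
```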
Related papers
- Symmetric Single Index Learning [46.7352578439663]
One popular model is the single-index model, in which labels are produced by an unknown linear projection with a possibly unknown link function.
We consider single index learning in the setting of symmetric neural networks.
arXiv Detail & Related papers (2023-10-03T14:59:00Z) - Bayesian polynomial neural networks and polynomial neural ordinary
differential equations [4.550705124365277]
Symbolic regression with neural networks and neural ordinary differential equations (ODEs) are powerful approaches for equation recovery of many science and engineering problems.
These methods provide point estimates for the model parameters and are currently unable to accommodate noisy data.
We address this challenge by developing and validating the following inference methods: the Laplace approximation, Markov Chain Monte Carlo sampling methods, and Bayesian variational inference.
arXiv Detail & Related papers (2023-08-17T05:42:29Z) - TMPNN: High-Order Polynomial Regression Based on Taylor Map
Factorization [0.0]
The paper presents a method for constructing high-order polynomial regression based on Taylor map factorization.
By benchmarking on UCI open-access datasets, we demonstrate that the proposed method performs comparably to state-of-the-art regression methods.
arXiv Detail & Related papers (2023-07-30T01:52:00Z) - A Recursively Recurrent Neural Network (R2N2) Architecture for Learning
Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z) - Bagged Polynomial Regression and Neural Networks [0.0]
Series and polynomial regression are able to approximate the same function classes as neural networks.
Bagged polynomial regression (BPR) is an attractive alternative to neural networks.
BPR performs as well as neural networks in crop classification using satellite data.
arXiv Detail & Related papers (2022-05-17T19:55:56Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Deep Learning with Functional Inputs [0.0]
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from observational data.
The algorithm combines genetic programming with sparse regression.
It can be used for governing analytical equation discovery as well as for partial differential equation (PDE) discovery.
arXiv Detail & Related papers (2020-04-03T17:21:57Z)