Learning Symbolic Expressions: Mixed-Integer Formulations, Cuts, and
Heuristics
- URL: http://arxiv.org/abs/2102.08351v1
- Date: Tue, 16 Feb 2021 18:39:14 GMT
- Title: Learning Symbolic Expressions: Mixed-Integer Formulations, Cuts, and
Heuristics
- Authors: Jongeun Kim, Sven Leyffer, Prasanna Balaprakash
- Abstract summary: We consider the problem of learning a regression function without assuming its functional form.
We propose a heuristic that iteratively builds an expression tree by solving a restricted MINLP.
- Score: 1.1602089225841632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we consider the problem of learning a regression function
without assuming its functional form. This problem is referred to as symbolic
regression. An expression tree is typically used to represent a solution
function, which is determined by assigning operators and operands to the nodes.
The symbolic regression problem can be formulated as a nonconvex mixed-integer
nonlinear program (MINLP), where binary variables are used to assign operators
and nonlinear expressions are used to propagate data values through nonlinear
operators such as square, square root, and exponential. We extend this
formulation by adding new cuts that improve the solution of this challenging
MINLP. We also propose a heuristic that iteratively builds an expression tree
by solving a restricted MINLP. We perform computational experiments and compare
our approach with a mixed-integer program-based method and a
neural-network-based method from the literature.
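To make the expression-tree view concrete, here is a minimal Python sketch (my illustration, not the paper's code) of how assigning operators and operands to the nodes of a fixed tree determines a candidate function, and how a data point's values propagate through nonlinear operators; in the MINLP formulation described above, a binary variable z[n, o] would select operator o at node n.

```python
# Minimal sketch (not the paper's code) of an expression tree for symbolic
# regression: nodes hold an operator, an input variable, or a constant, and
# evaluation propagates a data point through the tree.
import math

UNARY = {"sqrt": math.sqrt, "exp": math.exp, "square": lambda v: v * v}
BINARY = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

class Node:
    def __init__(self, label, children=()):
        self.label = label          # operator name, variable name, or constant
        self.children = children

    def eval(self, x):
        """Propagate a data point x (dict of variable values) through the tree."""
        if self.label in UNARY:
            return UNARY[self.label](self.children[0].eval(x))
        if self.label in BINARY:
            a, b = (c.eval(x) for c in self.children)
            return BINARY[self.label](a, b)
        if isinstance(self.label, str):  # operand: an input variable
            return x[self.label]
        return self.label                # operand: a numeric constant

# Example: the tree representing sqrt(x1) + 2 * x2
tree = Node("+", (Node("sqrt", (Node("x1"),)),
                  Node("*", (Node(2.0), Node("x2")))))
print(tree.eval({"x1": 4.0, "x2": 3.0}))  # 8.0
```

The MINLP searches over such assignments (which operator or operand sits at each node) while continuous variables carry each sample's node values, which is what the added cuts and the restricted-MINLP heuristic aim to make tractable.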
Related papers
- Evolving Form and Function: Dual-Objective Optimization in Neural Symbolic Regression Networks [0.0]
We introduce a method that combines gradient descent and evolutionary computation to yield neural networks that minimize the symbolic and behavioral errors of the equations they generate from data.
These evolved networks are shown to generate more symbolically and behaviorally accurate equations than those generated by networks trained by state-of-the-art gradient based neural symbolic regression methods.
arXiv Detail & Related papers (2025-02-24T18:20:41Z)
- Operator Learning Using Random Features: A Tool for Scientific Computing [3.745868534225104]
Supervised operator learning centers on the use of training data to estimate maps between infinite-dimensional spaces.
This paper introduces the function-valued random features method.
It leads to a supervised operator learning architecture that is practical for nonlinear problems.
arXiv Detail & Related papers (2024-08-12T23:10:39Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- GINN-LP: A Growing Interpretable Neural Network for Discovering Multivariate Laurent Polynomial Equations [1.1142444517901018]
We propose GINN-LP, an interpretable neural network, to discover the form of a Laurent Polynomial equation.
To the best of our knowledge, this is the first neural network that can discover arbitrary terms without any prior information on the order.
We show that GINN-LP outperforms state-of-the-art symbolic regression methods on benchmark datasets.
arXiv Detail & Related papers (2023-12-18T03:44:29Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
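As a point of reference for this recurrent view of iterative solvers, the sketch below (mine, not the authors' code) writes a classical RK4 step as a recurrence over stage evaluations; per the summary, the R2N2 superstructure replaces such fixed tableau weights with trainable parameters.

```python
# Hedged illustration: a classical Runge-Kutta (RK4) step as a recurrence
# over stage evaluations of the right-hand side f. The R2N2 superstructure,
# as summarized above, makes the stage/combination weights learnable; here
# they are the fixed RK4 tableau.
def rk4_step(f, t, y, h):
    # Each stage reuses earlier stages, like a small recurrent unrolling.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    # Output layer: a fixed linear combination of the stage evaluations.
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Usage: one step of y' = -y from y(0) = 1.
print(rk4_step(lambda t, y: -y, 0.0, 1.0, 0.1))  # ≈ exp(-0.1)
```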
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Transformation-Interaction-Rational Representation for Symbolic Regression [0.0]
Symbolic Regression searches for a function form that approximates a dataset, often using Genetic Programming, which can return overly complicated models.
A novel representation called Interaction-Transformation was recently proposed to alleviate this problem.
We propose an extension to this representation that defines a new function form as the rational of two Interaction-Transformation functions.
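A toy reading of this representation follows, assuming an Interaction-Transformation (IT) expression is a weighted sum of transformations applied to monomial interactions of the inputs; the weights, exponents, and transformation set below are illustrative, not the paper's exact definition.

```python
# Toy sketch of the Transformation-Interaction-Rational idea: the ratio of
# two IT expressions, each of the form sum_i w_i * t_i(prod_j x_j ** e_ij).
# All concrete terms here are made up for illustration.
import math

def it_eval(x, terms):
    """terms: list of (weight, transform, exponents)."""
    return sum(w * t(math.prod(xj ** e for xj, e in zip(x, exps)))
               for w, t, exps in terms)

def tir_eval(x, p_terms, q_terms):
    return it_eval(x, p_terms) / it_eval(x, q_terms)

# Example: (2*sin(x0*x1) + x0**2) / (1 + 0.5*x1)
p = [(2.0, math.sin, (1, 1)), (1.0, lambda v: v, (2, 0))]
q = [(1.0, lambda v: v, (0, 0)), (0.5, lambda v: v, (0, 1))]
print(tir_eval((1.0, 2.0), p, q))
```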
arXiv Detail & Related papers (2022-04-25T16:53:43Z)
- Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate the resulting parameter-gradient formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z)
- Neural Symbolic Regression that Scales [58.45115548924735]
We introduce the first symbolic regression method that leverages large-scale pre-training.
We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output-pairs.
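The sketch below (an assumption-laden illustration, not the paper's pipeline) shows the flavor of such procedural generation: sample a random equation, then pair it with input-output examples from which a model could learn to recover the symbolic form.

```python
# Illustrative only: procedurally sample random symbolic equations and pair
# each with input-output examples, the kind of data a Transformer could be
# pre-trained on. The grammar and sampling choices here are mine.
import random, math

OPS = [("+", 2), ("*", 2), ("sin", 1), ("exp", 1)]

def sample_expr(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(1, 3))])
    op, arity = random.choice(OPS)
    args = [sample_expr(depth - 1) for _ in range(arity)]
    return f"({args[0]} {op} {args[1]})" if arity == 2 else f"{op}({args[0]})"

def make_example(n_points=5):
    expr = sample_expr()
    # eval() is used only for this toy demo, never on untrusted input.
    f = lambda x: eval(expr, {"sin": math.sin, "exp": math.exp, "x": x})
    xs = [random.uniform(-1, 1) for _ in range(n_points)]
    return [(x, f(x)) for x in xs], expr  # (support set, target equation)

points, equation = make_example()
print(equation, points[:2])
```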
arXiv Detail & Related papers (2021-06-11T14:35:22Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Symbolic Regression using Mixed-Integer Nonlinear Optimization [9.638685454900047]
Symbolic Regression (SR) is a hard problem in machine learning.
We propose a hybrid algorithm that combines mixed-integer nonlinear optimization with explicit enumeration.
We show that our algorithm is competitive, for some synthetic data sets, with a state-of-the-art SR software and a recent physics-inspired method called AI Feynman.
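As a rough illustration of the enumeration half of such a hybrid (my simplification, not the paper's algorithm), one can explicitly enumerate small candidate expression forms, fit each form's continuous constants to data, and keep the best fit; the actual method embeds this inside mixed-integer nonlinear optimization.

```python
# Simplified sketch: enumerate candidate forms, fit y ≈ a*phi(x) + b by
# least squares for each, and select the form with the smallest error.
import math

def fit_affine(phi, xs, ys):
    """Least-squares fit of y ≈ a * phi(x) + b; returns (a, b, sse)."""
    zs = [phi(x) for x in xs]
    mz, my = sum(zs) / len(zs), sum(ys) / len(ys)
    var = sum((z - mz) ** 2 for z in zs)
    a = sum((z - mz) * (y - my) for z, y in zip(zs, ys)) / var if var else 0.0
    b = my - a * mz
    sse = sum((a * z + b - y) ** 2 for z, y in zip(zs, ys))
    return a, b, sse

# Candidate expression forms enumerated explicitly (illustrative set).
FORMS = {"x": lambda x: x, "x^2": lambda x: x * x,
         "sqrt(x)": math.sqrt, "exp(x)": math.exp}

xs = [0.5, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * math.sqrt(x) + 1.0 for x in xs]   # hidden truth: 3*sqrt(x) + 1
best = min(((name,) + fit_affine(f, xs, ys) for name, f in FORMS.items()),
           key=lambda r: r[3])
print(best)  # ('sqrt(x)', ~3.0, ~1.0, ~0.0)
```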
arXiv Detail & Related papers (2020-06-11T20:53:17Z)