A Comparison of Recent Algorithms for Symbolic Regression to Genetic Programming
- URL: http://arxiv.org/abs/2406.03585v1
- Date: Wed, 5 Jun 2024 19:01:43 GMT
- Title: A Comparison of Recent Algorithms for Symbolic Regression to Genetic Programming
- Authors: Yousef A. Radwan, Gabriel Kronberger, Stephan Winkler
- Abstract summary: Symbolic regression aims to model and map data in a way that can be understood by scientists.
Recent advancements have attempted to bridge the gap between deep learning and symbolic regression.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symbolic regression is a machine learning method with the goal of producing interpretable results. Unlike opaque machine learning methods such as random forests or neural networks, symbolic regression aims to model and map data in a way that can be understood by scientists. Recent advancements have attempted to bridge the gap between these two fields; new methodologies attempt to fuse the mapping power of neural networks and deep learning techniques with the explanatory power of symbolic regression. In this paper, we examine these newly emerging systems and test the performance of an end-to-end transformer model for symbolic regression against the traditional methods based on genetic programming that have spearheaded symbolic regression throughout the years. We compare these systems on novel datasets to avoid bias toward older methods that were refined on well-known benchmark datasets. Our results show that traditional GP methods, as implemented for example by Operon, remain superior to two recently published symbolic regression methods.
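To make the contrast with opaque models concrete, below is a minimal illustrative sketch of GP-based symbolic regression using the gplearn library (not one of the systems evaluated in the paper); the data-generating function and all hyperparameters are assumptions chosen only to show that the fitted model is a readable expression rather than a black box.

```python
# Minimal GP-based symbolic regression sketch (illustrative; not the paper's experimental setup).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
# Hypothetical ground-truth expression the GP should recover: y = x0^2 - 0.5*x1 + noise
y = X[:, 0] ** 2 - 0.5 * X[:, 1] + rng.normal(0, 0.01, size=200)

est = SymbolicRegressor(
    population_size=1000,         # candidate expression trees per generation
    generations=20,               # number of evolutionary iterations
    function_set=("add", "sub", "mul"),
    parsimony_coefficient=0.001,  # penalizes overly complex expressions
    random_state=0,
)
est.fit(X, y)

# Unlike a random forest or neural network, the result is a readable formula.
print(est._program)  # e.g. sub(mul(X0, X0), mul(0.500, X1))
```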
Related papers
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search [29.392036559507755]
Symbolic regression is a problem of learning a symbolic expression from numerical data.
Deep neural models trained on procedurally-generated synthetic datasets showed competitive performance.
We propose a novel method which provides the best of both worlds, based on a Monte-Carlo Tree Search procedure.
arXiv Detail & Related papers (2023-02-22T09:10:20Z) - Toward Physically Plausible Data-Driven Models: A Novel Neural Network Approach to Symbolic Regression [2.7071541526963805]
This paper proposes a novel neural network-based symbolic regression method.
It constructs physically plausible models even from very small training data sets, using prior knowledge about the system.
We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system.
arXiv Detail & Related papers (2023-02-01T22:05:04Z) - What learning algorithm is in-context learning? Investigations with linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We provide preliminary evidence that in-context learners share algorithmic features with these predictors; a short sketch of these closed-form predictors appears after this list.
arXiv Detail & Related papers (2022-11-28T18:59:51Z) - Interpretable Scientific Discovery with Symbolic Regression: A Review [8.414043731621419]
Symbolic regression is emerging as a promising machine learning method for learning mathematical expressions directly from data.
This survey presents a structured and comprehensive overview of symbolic regression methods and discusses their strengths and limitations.
arXiv Detail & Related papers (2022-11-20T05:12:39Z) - Benchmarking Node Outlier Detection on Graphs [90.29966986023403]
Graph outlier detection is an emerging but crucial machine learning task with numerous applications.
We present the first comprehensive unsupervised node outlier detection benchmark for graphs called UNOD.
arXiv Detail & Related papers (2022-06-21T01:46:38Z) - A Reinforcement Learning Approach to Domain-Knowledge Inclusion Using Grammar Guided Symbolic Regression [0.0]
We propose a Reinforcement-Based Grammar-Guided Symbolic Regression (RBG2-SR) method.
RBG2-SR constrains the representational space with domain knowledge by using a context-free grammar as the reinforcement learning action space.
We show that our method is competitive against other state-of-the-art methods on the benchmarks and offers the best error-complexity trade-off.
arXiv Detail & Related papers (2022-02-09T10:13:14Z) - Contemporary Symbolic Regression Methods and their Relative Performance [5.285811942108162]
We assess 14 symbolic regression methods and 7 machine learning methods on a set of 252 diverse regression problems.
For the real-world datasets, we benchmark the ability of each method to learn models with low error and low complexity.
For the synthetic problems, we assess each method's ability to find exact solutions in the presence of varying levels of noise.
arXiv Detail & Related papers (2021-07-29T22:12:59Z) - Neural Symbolic Regression that Scales [58.45115548924735]
We introduce the first symbolic regression method that leverages large scale pre-training.
We procedurally generate an unbounded set of equations and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs.
arXiv Detail & Related papers (2021-06-11T14:35:22Z) - Learning Gaussian Graphical Models with Latent Confounders [74.72998362041088]
We compare and contrast two strategies for inference in graphical models with latent confounders.
While these two approaches have similar goals, they are motivated by different assumptions about confounding.
We propose a new method, which combines the strengths of these two approaches.
arXiv Detail & Related papers (2021-05-14T00:53:03Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
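As referenced in the in-context learning entry above, the following is a minimal illustrative sketch (not taken from any of the listed papers) of the closed-form least-squares and ridge-regression predictors that trained in-context learners are reported to match; the synthetic data and regularization strength are arbitrary assumptions.

```python
# Illustrative closed-form predictors referenced in the in-context learning entry above.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))             # in-context examples (inputs)
w_true = np.array([1.0, -2.0, 0.5])      # hypothetical ground-truth weights
y = X @ w_true + rng.normal(0, 0.1, 50)  # in-context examples (targets)

lam = 0.1  # ridge regularization strength (arbitrary)

# Exact least-squares predictor: w = (X^T X)^{-1} X^T y
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge-regression predictor: w = (X^T X + lam * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

x_query = rng.normal(size=3)
print("least-squares prediction:", x_query @ w_ols)
print("ridge prediction:        ", x_query @ w_ridge)
```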