Guiding Genetic Programming with Graph Neural Networks
- URL: http://arxiv.org/abs/2411.05820v1
- Date: Sun, 03 Nov 2024 20:43:31 GMT
- Title: Guiding Genetic Programming with Graph Neural Networks
- Authors: Piotr Wyrwiński, Krzysztof Krawiec
- Abstract summary: We propose EvoNUDGE, which uses a graph neural network to elicit additional knowledge from symbolic regression problems.
In an extensive experiment on a large number of problem instances, EvoNUDGE is shown to significantly outperform multiple baselines.
- Score: 0.20718016474717196
- License:
- Abstract: In evolutionary computation, it is commonly assumed that a search algorithm acquires knowledge about a problem instance by sampling solutions from the search space and evaluating them with a fitness function. This is necessarily inefficient because fitness reveals very little about solutions -- yet they contain more information that can be potentially exploited. To address this observation in genetic programming, we propose EvoNUDGE, which uses a graph neural network to elicit additional knowledge from symbolic regression problems. The network is queried on the problem before an evolutionary run to produce a library of subprograms, which is subsequently used to seed the initial population and bias the actions of search operators. In an extensive experiment on a large number of problem instances, EvoNUDGE is shown to significantly outperform multiple baselines, including the conventional tree-based genetic programming and the purely neural variant of the method.
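The pipeline the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical rendering of the two uses of the GNN-produced subprogram library, namely population seeding and biased mutation; all helper callables (`gnn_propose_subprograms`, `random_tree`, `grow_around`, `splice_subtree`) are assumptions for illustration, not the authors' API.

```python
import random

# Minimal sketch of the EvoNUDGE idea as described in the abstract, NOT the
# authors' implementation. The GNN is queried once, before the evolutionary
# run, to produce a library of subprograms; the library then (1) seeds the
# initial population and (2) biases mutation toward splicing in library
# subtrees. All helper callables are hypothetical.

def seed_population(problem, pop_size, gnn_propose_subprograms,
                    random_tree, grow_around, seed_fraction=0.5):
    library = gnn_propose_subprograms(problem)   # single pre-run GNN query
    population = []
    for _ in range(pop_size):
        if library and random.random() < seed_fraction:
            # grow an individual around a GNN-suggested subprogram
            population.append(grow_around(random.choice(library)))
        else:
            population.append(random_tree())     # conventional random init
    return library, population

def biased_mutation(individual, library, splice_subtree, random_tree,
                    p_library=0.3):
    # with probability p_library, splice in a library subprogram instead of
    # a freshly generated random subtree
    donor = random.choice(library) if library and random.random() < p_library \
        else random_tree()
    return splice_subtree(individual, donor)
```

Under this reading, the GNN cost is paid once per problem instance, while every generation reuses the same library.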
Related papers
- A Learning Search Algorithm for the Restricted Longest Common Subsequence Problem [40.64116457007417]
The Restricted Longest Common Subsequence (RLCS) problem has significant applications in bioinformatics.
This paper introduces two novel approaches designed to enhance the search process by steering it towards promising regions.
An important contribution of this paper is the generation of real-world instances in which scientific abstracts serve as input strings.
arXiv Detail & Related papers (2024-10-15T20:02:15Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
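Since the entry above centers on direct feedback alignment, here is a minimal NumPy sketch of plain DFA on a two-layer regression network. The paper tailors the idea to GNNs, which this toy example does not attempt.

```python
import numpy as np

# Toy illustration of Direct Feedback Alignment (DFA): the output error is
# sent straight to the hidden layer through a FIXED random matrix B1,
# instead of being backpropagated through W2.T. Plain MLP shown for
# simplicity; the paper adapts this principle to GNNs.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B1 = rng.normal(0.0, 0.1, (n_hid, n_out))    # fixed random feedback matrix

x = rng.normal(size=(n_in, 1))
t = rng.normal(size=(n_out, 1))              # dummy regression target
lr = 0.05
for _ in range(200):
    a1 = W1 @ x                               # forward pass
    h1 = np.tanh(a1)
    y = W2 @ h1
    e = y - t                                 # output error
    d1 = (B1 @ e) * (1.0 - np.tanh(a1) ** 2)  # DFA: B1 @ e replaces W2.T @ e
    W2 -= lr * e @ h1.T
    W1 -= lr * d1 @ x.T
```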
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- On Recoverability of Graph Neural Network Representations [9.02766568914452]
We propose the notion of recoverability, which is tightly related to information aggregation in GNNs.
We demonstrate, through experimental results on various datasets and different GNN architectures, that estimated recoverability correlates with aggregation method expressivity and graph sparsification quality.
We believe that the proposed method could provide an essential tool for understanding the roots of the aforementioned problems, and potentially lead to a GNN design that overcomes them.
arXiv Detail & Related papers (2022-01-30T15:22:29Z)
- Multigoal-oriented dual-weighted-residual error estimation using deep neural networks [0.0]
Deep learning is considered a powerful tool with high flexibility for approximating functions.
Our approach is based on a posteriori error estimation in which the adjoint problem is solved for the error localization.
An efficient and easy-to-implement algorithm is developed to obtain a posteriori error estimates for multiple goal functionals.
arXiv Detail & Related papers (2021-12-21T16:59:44Z)
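For orientation, the approach above builds on the classical single-goal dual-weighted-residual identity; a standard statement (the multigoal extension developed in the paper is not reproduced here) is:

```latex
J(u) - J(u_h) \approx \rho(u_h)(z - z_h),
\qquad \rho(u_h)(\varphi) := F(\varphi) - a(u_h)(\varphi),
```

where $a(u)(\varphi) = F(\varphi)$ is the primal problem, $z$ is the adjoint solution associated with the goal functional $J$, and $z_h$ is its discrete approximation. Solving the adjoint problem is what localizes the error, as the summary states.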
- Symbolic Regression via Neural-Guided Genetic Programming Population Seeding [6.9501458586819505]
Symbolic regression is a discrete optimization problem generally believed to be NP-hard.
Prior approaches to solving the problem include neural-guided search and genetic programming.
We propose a neural-guided component used to seed the starting population of a random restart genetic programming component.
arXiv Detail & Related papers (2021-10-29T19:26:41Z)
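A minimal sketch of the seeding scheme the entry above describes, assuming a random-restart outer loop; `neural_sampler`, `gp_run`, `update_sampler`, and `fitness` are hypothetical callables, and the feedback step is optional:

```python
# Hypothetical sketch (not the authors' code) of neural-guided population
# seeding inside a random-restart GP loop: at every restart, a neural
# sampler proposes the initial population; the best evolved individual may
# be fed back to the sampler.

def restart_gp(neural_sampler, gp_run, update_sampler, fitness,
               n_restarts=10, pop_size=100):
    best = None
    for _ in range(n_restarts):
        seeds = [neural_sampler() for _ in range(pop_size)]  # neural-guided seeds
        champion = gp_run(seeds)             # one full GP run from these seeds
        if best is None or fitness(champion) > fitness(best):
            best = champion
        update_sampler(champion)             # optional feedback to the sampler
    return best
```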
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as on learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
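To make "learning the sampling distribution in RRT" concrete, here is a toy, self-contained example of biasing RRT sampling with a learned scorer; `learned_score` is a placeholder for a trained GNN and is not the paper's model:

```python
import numpy as np

# Toy illustration (not the paper's method) of biasing RRT sampling with a
# learned scorer: draw several candidate samples per iteration and expand
# toward the one the model scores highest.
rng = np.random.default_rng(0)

def learned_score(x, goal):
    # placeholder for a trained GNN; here it simply prefers samples nearer
    # the goal
    return -np.linalg.norm(x - goal)

def biased_rrt(start, goal, n_iters=500, step=0.05, n_candidates=8):
    goal = np.asarray(goal, dtype=float)
    tree = [np.asarray(start, dtype=float)]
    for _ in range(n_iters):
        # candidate samples in a unit-square workspace
        cands = rng.uniform(0.0, 1.0, size=(n_candidates, 2))
        target = max(cands, key=lambda c: learned_score(c, goal))
        nearest = min(tree, key=lambda v: np.linalg.norm(v - target))
        direction = target - nearest
        new = nearest + step * direction / (np.linalg.norm(direction) + 1e-9)
        tree.append(new)                     # (collision checking omitted)
        if np.linalg.norm(new - goal) < step:
            break
    return tree

tree = biased_rrt(start=(0.1, 0.1), goal=(0.9, 0.9))
```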
- Devolutionary genetic algorithms with application to the minimum labeling Steiner tree problem [0.0]
This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performances in solving the minimum labeling Steiner tree (MLST) problem.
We define devolutionary algorithms as the process of reaching a feasible solution by devolving a population of super-optimal infeasible solutions over time.
We show how classical evolutionary concepts, such as crossover, mutation, and fitness, can be adapted to aim at reaching an optimal or close-to-optimal solution.
arXiv Detail & Related papers (2020-04-18T13:27:28Z)