Improving the efficiency of GP-GOMEA for higher-arity operators
- URL: http://arxiv.org/abs/2402.09854v1
- Date: Thu, 15 Feb 2024 10:20:40 GMT
- Title: Improving the efficiency of GP-GOMEA for higher-arity operators
- Authors: Thalea Schlender, Mafalda Malafaia, Tanja Alderliesten, Peter A.N.
Bosman
- Abstract summary: Genetic Programming (GP) can offer a way to evolve inherently interpretable expressions.
GP-GOMEA is a form of GP that has been found particularly effective at evolving expressions that are accurate yet of limited size and, thus, promote interpretability.
This paper proposes two enhancements to GP-GOMEA: semantic subtree inheritance and greedy child selection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deploying machine learning models into sensitive domains in our society
requires these models to be explainable. Genetic Programming (GP) can offer a
way to evolve inherently interpretable expressions. GP-GOMEA is a form of GP
that has been found particularly effective at evolving expressions that are
accurate yet of limited size and, thus, promote interpretability. Despite this
strength, a limitation of GP-GOMEA is its template-based representation. This negatively affects
its scalability regarding the arity of operators that can be used, since with
increasing operator arity, an increasingly large part of the template tends to
go unused. In this paper, we therefore propose two enhancements to GP-GOMEA:
(i) semantic subtree inheritance, which performs additional variation steps
that consider the semantic context of a subtree, and (ii) greedy child
selection, which explicitly considers parts of the template that in standard
GP-GOMEA remain unused. We compare different versions of GP-GOMEA regarding
search enhancements on a set of continuous and discontinuous regression
problems, with varying tree depths and operator sets. Experimental results show
that both proposed search enhancements have a generally positive impact on the
performance of GP-GOMEA, especially when the set of operators to choose from is
large and contains higher-arity operators.
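The template limitation described in the abstract can be made concrete with a small illustrative sketch (not the authors' code): GP-GOMEA evolves expressions inside a fixed perfect tree whose internal nodes each reserve child slots for the maximum operator arity r, so an expression that only uses operators of arity a < r leaves many template nodes unused. The function names below are hypothetical, chosen for illustration.

```python
# Illustrative sketch, assuming GP-GOMEA's template is a perfect tree sized
# for the maximum operator arity (as described in the abstract). When an
# operator of lower arity is placed at a node, its surplus child subtrees
# go unused.

def template_size(depth: int, max_arity: int) -> int:
    """Node count of a perfect max_arity-ary tree of the given depth
    (depth 0 = a single leaf)."""
    if max_arity == 1:
        return depth + 1
    return (max_arity ** (depth + 1) - 1) // (max_arity - 1)

def used_size(depth: int, arity: int) -> int:
    """Nodes actually reachable when every internal node holds an
    operator of the given (lower) arity, at the same depth."""
    return template_size(depth, arity)

# Example: a template sized for ternary operators (r = 3) at depth 4,
# while the evolved expression only uses binary operators (a = 2).
total = template_size(4, 3)   # 121 template nodes
used = used_size(4, 2)        # 31 nodes actually used
print(f"{total - used} of {total} template nodes unused")
```

Even in this small example, roughly three quarters of the template goes unused, which is the scalability issue the two proposed enhancements target.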
Related papers
- A Better Multi-Objective GP-GOMEA -- But do we Need it? [0.0]
In Symbolic Regression (SR), achieving a proper balance between accuracy and interpretability remains a key challenge. The Genetic Programming variant of the Gene-pool Optimal Mixing Evolutionary Algorithm (GP-GOMEA) is of particular interest, as it achieves state-of-the-art performance using a template that limits the size of expressions. A recently introduced expansion, modular GP-GOMEA, is capable of decomposing expressions using multiple subexpressions, further increasing the chances of interpretability. A multi-objective variant of GP-GOMEA exists, which can be used, for instance, to optimize for size and accuracy simultaneously.
arXiv Detail & Related papers (2025-07-04T18:54:27Z) - Thinking Outside the Template with Modular GP-GOMEA [0.0]
This paper presents a modular representation for GP-GOMEA that allows multiple trees, usable as (functional) subexpressions, to be evolved simultaneously. We find that our proposed approach generally outperforms single-template GP-GOMEA and can moreover uncover ground-truth expressions underlying synthetic datasets with modular subexpressions at a faster rate than GP-GOMEA without modular subexpressions.
arXiv Detail & Related papers (2025-05-02T13:29:55Z) - A Functional Analysis Approach to Symbolic Regression [0.990319860068191]
Symbolic regression (SR) poses a significant challenge for randomized searches.
Traditional genetic programming (GP) algorithms exhibit limited performance when tree-based representations are used for SR.
We introduce a novel SR approach that draws insights from functional analysis.
arXiv Detail & Related papers (2024-02-09T10:24:47Z) - Domain Invariant Learning for Gaussian Processes and Bayesian
Exploration [39.83530605880014]
We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-18T16:13:34Z) - Differentiable Genetic Programming for High-dimensional Symbolic
Regression [13.230237932229052]
Symbolic regression (SR) is considered an effective way to reach interpretable machine learning (ML).
Genetic programming (GP) has been the dominant approach to solving SR problems.
We propose a differentiable approach named DGP to construct GP trees towards high-dimensional SR.
arXiv Detail & Related papers (2023-04-18T11:39:45Z) - Interactive Segmentation as Gaussian Process Classification [58.44673380545409]
Click-based interactive segmentation (IS) aims to extract the target objects under user interaction.
Most of the current deep learning (DL)-based methods mainly follow the general pipelines of semantic segmentation.
We propose to formulate the IS task as a Gaussian process (GP)-based pixel-wise binary classification model on each image.
arXiv Detail & Related papers (2023-02-28T14:01:01Z) - A Lanczos approach to the Adiabatic Gauge Potential [0.0]
The Adiabatic Gauge Potential (AGP) measures the rate at which the eigensystem of a Hamiltonian changes under adiabatic deformations.
We employ a version of this approach by using the Lanczos algorithm to evaluate the AGP operator in terms of Krylov vectors and the AGP norm in terms of the Lanczos coefficients.
arXiv Detail & Related papers (2023-02-14T18:18:21Z) - Scaling Gaussian Process Optimization by Evaluating a Few Unique
Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adopt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model updates with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z) - Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
arXiv Detail & Related papers (2020-10-20T10:19:56Z) - Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z) - Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.