Coefficient Mutation in the Gene-pool Optimal Mixing Evolutionary
Algorithm for Symbolic Regression
- URL: http://arxiv.org/abs/2204.12159v1
- Date: Tue, 26 Apr 2022 08:58:47 GMT
- Title: Coefficient Mutation in the Gene-pool Optimal Mixing Evolutionary
Algorithm for Symbolic Regression
- Authors: Marco Virgolin, Peter A. N. Bosman
- Abstract summary: GP-GOMEA is a top-performing algorithm for symbolic regression.
We show how simple approaches for optimizing coefficients can be integrated into GP-GOMEA.
We find that coefficient mutation can substantially improve the rate at which the underlying equation is re-discovered.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, the genetic programming version of the gene-pool optimal mixing
evolutionary algorithm (GP-GOMEA) is among the top-performing algorithms for
symbolic regression (SR). A key strength of GP-GOMEA is its way of performing
variation, which dynamically adapts to the emergence of patterns in the
population. However, GP-GOMEA lacks a mechanism to optimize coefficients. In
this paper, we study how fairly simple approaches for optimizing coefficients
can be integrated into GP-GOMEA. In particular, we considered two variants of
Gaussian coefficient mutation. We performed experiments using different
settings on 23 benchmark problems, and used machine learning to estimate what
aspects of coefficient mutation matter most. We find that the most important
aspect is that the number of coefficient mutation attempts needs to be
commensurate with the number of mixing operations that GP-GOMEA performs. We
applied GP-GOMEA with the best-performing coefficient mutation approach to the
data sets of SRBench, a large SR benchmark, for which a ground-truth underlying
equation is known. We find that coefficient mutation substantially improves
the rate at which the underlying equation is re-discovered, but only when no
noise is added to the target variable. In the presence of noise, GP-GOMEA with
coefficient mutation discovers alternative but similarly accurate equations.
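The core idea of Gaussian coefficient mutation can be sketched as a simple hill-climb over the real-valued constants of a fixed expression. The following is an illustrative sketch, not the authors' implementation; the function name, the improvement-only acceptance rule, and the `loss` interface are assumptions for demonstration. The `attempts` parameter reflects the paper's main finding that the number of mutation attempts should be commensurate with the number of mixing operations.

```python
import random

def gaussian_coefficient_mutation(coeffs, loss, sigma=0.1, attempts=10):
    """Hill-climb the constants of a fixed expression via Gaussian perturbation.

    coeffs   -- list of real-valued coefficients appearing in the expression
    loss     -- callable mapping a coefficient list to a scalar error
    sigma    -- standard deviation of the Gaussian perturbations
    attempts -- number of mutation trials (the paper finds this should be
                commensurate with the number of GOMEA mixing operations)
    """
    best = list(coeffs)
    best_loss = loss(best)
    for _ in range(attempts):
        # perturb every coefficient with zero-mean Gaussian noise
        candidate = [c + random.gauss(0.0, sigma) for c in best]
        cand_loss = loss(candidate)
        if cand_loss < best_loss:  # keep only improving mutations
            best, best_loss = candidate, cand_loss
    return best, best_loss
```

Because only improving mutations are accepted, the returned loss is never worse than the starting loss, regardless of `sigma` or `attempts`.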
Related papers
- A convex formulation of covariate-adjusted Gaussian graphical models via natural parametrization [6.353176264090468]
In expression quantitative trait locus (eQTL) studies, both the mean expression level of genes and their conditional independence structure may be adjusted by variants local to those genes.
Existing methods to estimate covariate-adjusted GGMs either allow only the mean to depend on covariates or suffer from poor scalability due to the inherent non-convexity of simultaneously estimating the mean and the precision matrix.
arXiv Detail & Related papers (2024-10-08T20:02:10Z) - Sparse Gaussian Process Hyperparameters: Optimize or Integrate? [5.949779668853556]
We propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior.
We compare this scheme against natural baselines from the literature, including sparse variational GPs (SVGPs), and provide an extensive computational analysis.
arXiv Detail & Related papers (2022-11-04T14:06:59Z) - Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z) - Computationally-efficient initialisation of GPs: The generalised
variogram method [1.0312968200748118]
Our strategy can be used as a pretraining stage to find initial conditions for maximum-likelihood (ML) training.
We provide experimental validation in terms of accuracy, consistency with ML and computational complexity for different kernels using synthetic and real-world data.
arXiv Detail & Related papers (2022-10-11T12:13:21Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
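The group-based co-evolution described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the GESMR authors' code: the group split, the improvement-only acceptance of children, and the multiplicative resampling of elite rates are all assumptions chosen to show the mechanism.

```python
import random

def gesmr_generation(solutions, rates, fitness, sigma_meta=2.0):
    """One illustrative GESMR-style generation: split the solution population
    into len(rates) equal groups, mutate each group with its assigned rate,
    then keep the rates whose groups improved most and resample the rest
    around those elites. Assumes len(solutions) is divisible by len(rates)."""
    group_size = len(solutions) // len(rates)
    gains, new_solutions = [], []
    for g, rate in enumerate(rates):
        group = solutions[g * group_size:(g + 1) * group_size]
        best_gain = 0.0
        for s in group:
            child = [x + random.gauss(0.0, rate) for x in s]
            gain = fitness(child) - fitness(s)
            new_solutions.append(child if gain > 0 else s)
            best_gain = max(best_gain, gain)
        gains.append(best_gain)  # credit each rate with its group's best gain
    # keep the top half of rates; refill by multiplicative perturbation
    elite = [r for _, r in sorted(zip(gains, rates), reverse=True)]
    elite = elite[: max(1, len(rates) // 2)]
    new_rates = elite + [random.choice(elite) * sigma_meta ** random.uniform(-1, 1)
                         for _ in range(len(rates) - len(elite))]
    return new_solutions, new_rates
```

Since a child replaces its parent only when it improves fitness, no solution ever gets worse within a generation.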
arXiv Detail & Related papers (2022-04-11T01:08:26Z) - Scaling Gaussian Process Optimization by Evaluating a Few Unique
Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI to adapt rules from batched GP-Opt.
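The benefit of sticking to one candidate comes from simple noise averaging: evaluating the same point k times shrinks the observation noise variance by a factor of k. A hypothetical helper illustrating this (the paper itself modifies GP-UCB and GP-EI; this function and its name are assumptions):

```python
def evaluate_sticking(f_noisy, x, repeats):
    """Evaluate one candidate `repeats` times and return the averaged
    observation. Averaging k noisy evaluations reduces the noise variance
    of the fed-back observation by a factor of k, which is what makes
    repeating a promising candidate worthwhile in noisy GP optimization."""
    return sum(f_noisy(x) for _ in range(repeats)) / repeats
```

The averaged value can then be handed to the GP as a single low-noise observation at `x`.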
arXiv Detail & Related papers (2022-01-30T20:42:14Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Genetic Programming is Naturally Suited to Evolve Bagging Ensembles [0.0]
We show that minor changes to fitness evaluation and selection are sufficient to make a simple and otherwise-traditional GP algorithm evolve bagging ensembles efficiently.
Our algorithm fares very well against state-of-the-art ensemble and non-ensemble GP algorithms.
arXiv Detail & Related papers (2020-09-13T16:28:11Z)
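The change to fitness evaluation mentioned for the bagging-ensemble paper can be pictured as scoring each GP individual on bootstrap resamples of the training set, so selection can later pick one well-performing model per bag. This is an illustrative sketch under that assumption, not the paper's actual code; the function name and the MSE score are hypothetical choices.

```python
import random

def bagged_fitness(individual_predict, X, y, n_bags=10, seed=0):
    """Score one GP individual on several bootstrap resamples of (X, y),
    returning one mean-squared-error value per bag."""
    rng = random.Random(seed)  # fixed bags so all individuals see the same data
    n = len(X)
    scores = []
    for _ in range(n_bags):
        idx = [rng.randrange(n) for _ in range(n)]  # sample n points with replacement
        mse = sum((individual_predict(X[i]) - y[i]) ** 2 for i in idx) / n
        scores.append(mse)
    return scores
```

Seeding the bag indices means every individual is evaluated on identical bags, keeping per-bag comparisons fair during selection.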
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.