Solving even-parity problems using traceless genetic programming
- URL: http://arxiv.org/abs/2110.02014v1
- Date: Mon, 4 Oct 2021 13:23:32 GMT
- Title: Solving even-parity problems using traceless genetic programming
- Authors: Mihai Oltean
- Abstract summary: TGP is a hybrid method that combines a technique for building individuals with a technique for representing them.
Two genetic operators are used in conjunction with TGP: crossover and insertion.
TGP is applied for evolving digital circuits for the even-parity problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A genetic programming (GP) variant called traceless genetic programming (TGP)
is proposed in this paper. TGP is a hybrid method combining a technique for
building individuals and a technique for representing individuals. The main
difference between TGP and other GP techniques is that TGP does not explicitly
store the evolved computer programs. Two genetic operators are used in
conjunction with TGP: crossover and insertion. TGP is applied for evolving
digital circuits for the even-parity problem. Numerical experiments show that
TGP outperforms standard GP by several orders of magnitude.
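As a concrete illustration of the abstract, the following is a minimal sketch of the TGP idea for even-3-parity in Python. It follows only what the abstract states: individuals are stored purely as their output vectors on the fitness cases (no program trees are kept), and the only operators are crossover and insertion. The function set {AND, OR, NAND, NOR}, population size, steady-state replacement, and all parameter values are assumptions for illustration, not details taken from the paper.

```python
# Minimal TGP-style sketch for even-3-parity. The function set, population
# size, and selection scheme below are illustrative assumptions, not the
# paper's settings.
import random
from itertools import product

N_BITS = 3
CASES = list(product([0, 1], repeat=N_BITS))          # all 2^n fitness cases
TARGET = [sum(c) % 2 == 0 for c in CASES]             # even-parity truth table

# Assumed Boolean function set, applied elementwise during crossover.
FUNCS = [
    lambda a, b: a and b,        # AND
    lambda a, b: a or b,         # OR
    lambda a, b: not (a and b),  # NAND
    lambda a, b: not (a or b),   # NOR
]

def terminal(i):
    """Output vector of input variable x_i over every fitness case."""
    return [bool(c[i]) for c in CASES]

def errors(ind):
    """Number of fitness cases where the candidate output is wrong."""
    return sum(o != t for o, t in zip(ind, TARGET))

# Individuals are just output vectors; no program tree is ever stored.
pop = [terminal(random.randrange(N_BITS)) for _ in range(50)]
for _ in range(20_000):
    if random.random() < 0.05:                        # insertion operator
        child = terminal(random.randrange(N_BITS))
    else:                                             # crossover operator
        p1, p2 = random.sample(pop, 2)
        f = random.choice(FUNCS)                      # random gate combines
        child = [bool(f(a, b)) for a, b in zip(p1, p2)]
    worst = max(range(len(pop)), key=lambda k: errors(pop[k]))
    if errors(child) <= errors(pop[worst]):           # steady-state replacement
        pop[worst] = child
    if errors(child) == 0:
        break

print("best error count:", errors(min(pop, key=errors)), "of", len(CASES))
```

A successful run ends with an output vector matching the even-parity truth table on all 2^3 fitness cases; recovering an explicit circuit from it would require extra bookkeeping, precisely because TGP stores no program trees.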
Related papers
- Liquid State Genetic Programming [0.0]
A new Genetic Programming variant called Liquid State Genetic Programming (LSGP) is proposed in this paper.
LSGP is a hybrid method combining a dynamic memory for storing the inputs (the liquid) with a Genetic Programming technique for the problem-solving part.
Numerical experiments show that LSGP performs similarly and sometimes even better than standard Genetic Programming for the considered test problems.
arXiv Detail & Related papers (2023-12-05T17:09:21Z) - Thin and Deep Gaussian Processes [43.22976185646409]
This work proposes a novel synthesis of both previous approaches: Thin and Deep GP (TDGP).
We show with theoretical and experimental results that i) TDGP is tailored to specifically discover lower-dimensional manifolds in the input data, ii) TDGP behaves well when increasing the number of layers, and iii) TDGP performs well on standard benchmark datasets.
arXiv Detail & Related papers (2023-10-17T18:50:24Z) - Interactive Segmentation as Gaussian Process Classification [58.44673380545409]
Click-based interactive segmentation (IS) aims to extract the target objects under user interaction.
Most of the current deep learning (DL)-based methods mainly follow the general pipelines of semantic segmentation.
We propose to formulate the IS task as a Gaussian process (GP)-based pixel-wise binary classification model on each image.
arXiv Detail & Related papers (2023-02-28T14:01:01Z) - Weighted Ensembles for Active Learning with Adaptivity [60.84896785303314]
This paper presents an ensemble of GP models with weights adapted to the labeled data collected incrementally.
Building on this novel EGP model, a suite of acquisition functions emerges based on the uncertainty and disagreement rules.
An adaptively weighted ensemble of EGP-based acquisition functions is also introduced to further robustify performance.
arXiv Detail & Related papers (2022-06-10T11:48:49Z) - Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model updates with scalability (see the sketch after this list), the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z) - Solving classification problems using Traceless Genetic Programming [0.0]
Traceless Genetic Programming (TGP) is a new Genetic Programming (GP) variant that may be used for solving difficult real-world problems.
In this paper, TGP is used for solving real-world classification problems taken from PROBEN1.
arXiv Detail & Related papers (2021-10-07T06:13:07Z) - Using Traceless Genetic Programming for Solving Multiobjective Optimization Problems [1.9493449206135294]
Traceless Genetic Programming (TGP) is a Genetic Programming (GP) variant used in cases where the focus is on the output of the program rather than the program itself.
Two genetic operators are used in conjunction with TGP: crossover and insertion.
Numerical experiments show that TGP is able to solve the considered test problems very quickly and very well.
arXiv Detail & Related papers (2021-10-07T05:55:55Z) - Deep Gaussian Process Emulation using Stochastic Imputation [0.0]
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using imputation.
By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs.
arXiv Detail & Related papers (2021-07-04T10:46:23Z) - SGP-DT: Semantic Genetic Programming Based on Dynamic Targets [6.841231589814175]
This paper presents a new Semantic GP approach based on Dynamic Targets (SGP-DT).
The evolution in each run is guided by a new (dynamic) target based on the residual errors.
SGP-DT achieves small RMSE values, on average 23.19% smaller than that of epsilon-lexicase.
arXiv Detail & Related papers (2020-01-30T19:33:58Z)
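The Incremental Ensemble Gaussian Processes entry above hinges on the random feature-based approximation that keeps each GP expert's online prediction and update scalable. As a hedged illustration of that building block alone (not of the IE-GP ensemble or its data-adaptive weighting), here is a minimal random Fourier feature sketch; the RBF kernel with unit lengthscale, the feature count, and the toy data stream are all assumptions.

```python
# Online GP regression via random Fourier features: a standard scalable
# approximation, sketched here to illustrate the per-expert computation
# mentioned in the IE-GP summary above (all specifics are assumptions).
import numpy as np

D, d, noise = 100, 2, 0.1                  # feature count, input dim, noise std
rng = np.random.default_rng(0)
W = rng.normal(size=(D, d))                # spectral samples for an RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)  # with unit lengthscale (assumed)

def phi(x):
    """Feature map with k(x, x') ≈ phi(x) @ phi(x') for the RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Gaussian posterior over linear weights theta, so that f(x) ≈ phi(x) @ theta.
mu, Sigma = np.zeros(D), np.eye(D)
for _ in range(200):                       # one streaming point at a time
    x = rng.normal(size=d)
    y = np.sin(x[0]) + noise * rng.normal()
    p = phi(x)
    pred_mean = p @ mu                     # online prediction before update
    pred_var = p @ Sigma @ p + noise ** 2
    gain = Sigma @ p / pred_var            # rank-one Bayesian (Kalman) update
    mu = mu + gain * (y - pred_mean)
    Sigma = Sigma - np.outer(gain, p @ Sigma)
```

Each incoming point costs O(D^2) regardless of how much data has been seen, which is what makes this kind of approximation attractive for streaming ensembles of GP experts.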