Genetic Programming with Local Scoring
- URL: http://arxiv.org/abs/2211.17234v1
- Date: Wed, 30 Nov 2022 18:36:42 GMT
- Title: Genetic Programming with Local Scoring
- Authors: Max Vistrup
- Abstract summary: We present several new techniques for evolving code through sequences of mutations.
Among these are (1) a method of local scoring assigning a score to each expression in a program, (2) suppose-expressions which act as an intermediate step to evolving if-conditionals, and (3) cyclic evolution in which we evolve programs through phases of expansion and reduction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present several new techniques for evolving code through sequences of
mutations. Among these are (1) a method of local scoring assigning a score to
each expression in a program, allowing us to more precisely identify buggy
code, (2) suppose-expressions which act as an intermediate step to evolving
if-conditionals, and (3) cyclic evolution in which we evolve programs through
phases of expansion and reduction. To demonstrate their merits, we provide a
basic proof-of-concept implementation which we show evolves correct code for
several functions manipulating integers and lists, including some that are
intractable by means of existing Genetic Programming techniques.
Related papers
- Recursive Visual Programming [53.76415744371285]
We propose Recursive Visual Programming (RVP), which simplifies generated routines, provides more efficient problem solving, and can manage more complex data structures.
We show RVP's efficacy through extensive experiments on benchmarks including VSR, COVR, GQA, and NextQA.
arXiv Detail & Related papers (2023-12-04T17:27:24Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE)
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- Reinforcement Learning for Mutation Operator Selection in Automated Program Repair [11.756822700775668]
We investigate the feasibility of a reinforcement learning-based approach for the selection of mutation operators in program repair.
Our proposed approach is agnostic to the programming language, granularity level, and search strategy, and allows for easy integration into existing repair tools.
We evaluate our approach on 353 real-world bugs from the Defects4J benchmark.
arXiv Detail & Related papers (2023-06-09T10:09:16Z) - NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual
Question Answering [52.10214317661547]
Current numerical reasoning methods autoregressively decode program sequences.
The accuracy of program generation drops sharply as the decoding steps unfold due to error propagation.
In this paper, we propose a non-autoregressive program generation framework.
arXiv Detail & Related papers (2022-11-07T11:25:21Z) - Evolutionary Programmer: Autonomously Creating Path Planning Programs
based on Evolutionary Algorithms [2.9091164466276984]
A first-of-its-kind machine learning method named Evolutionary Programmer is proposed to solve this problem.
The new method recomposes the operators into an integrated planner, so that the most suitable operators can be selected to adapt to changing circumstances.
arXiv Detail & Related papers (2022-03-30T12:22:14Z) - Iterative Genetic Improvement: Scaling Stochastic Program Synthesis [11.195558777385259]
Program synthesis aims to automatically find programs from an underlying programming language that satisfy a given specification.
Existing program synthesis techniques do not meet this expectation well, suffering from scalability issues.
Here we propose a new framework for program synthesis, called iterative genetic improvement, to overcome this problem.
arXiv Detail & Related papers (2022-02-26T02:00:35Z) - Discovering Non-monotonic Autoregressive Orderings with Variational
Inference [67.27561153666211]
We develop an unsupervised parallelizable learner that discovers high-quality generation orders purely from training data.
We implement the encoder as a Transformer with non-causal attention that outputs permutations in one forward pass.
Empirical results in language modeling tasks demonstrate that our method is context-aware and discovers orderings that are competitive with or even better than fixed orders.
arXiv Detail & Related papers (2021-10-27T16:08:09Z) - Self-Supervised Learning to Prove Equivalence Between Straight-Line
Programs via Rewrite Rules [9.1570563482476]
Two programs are equivalent if there exists a sequence of application of rewrite rules that leads to rewriting one program into the other.
We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs.
Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs.
arXiv Detail & Related papers (2021-09-22T01:37:08Z) - Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z) - Latent Programmer: Discrete Latent Codes for Program Synthesis [56.37993487589351]
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences.
We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient.
We introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language.
arXiv Detail & Related papers (2020-12-01T10:11:35Z) - Obtaining Basic Algebra Formulas with Genetic Programming and Functional
Rewriting [0.0]
We use functional programming rewriting to boost inductive genetic programming.
Parents are selected following a tournament selection mechanism and the next population is obtained following a steady-state strategy.
We compare the performance of our technique on a set of problems that are hard for classical genetic programming.
arXiv Detail & Related papers (2020-05-03T23:32:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.