Type-driven Neural Programming by Example
- URL: http://arxiv.org/abs/2008.12613v5
- Date: Thu, 17 Sep 2020 09:51:08 GMT
- Title: Type-driven Neural Programming by Example
- Authors: Kiara Grouwstra
- Abstract summary: We look into programming by example (PBE), which is about finding a program mapping given inputs to given outputs.
We propose a way to incorporate programming types into a neural program synthesis approach for PBE.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this thesis we look into programming by example (PBE), which is about
finding a program mapping given inputs to given outputs. PBE has traditionally
seen a split between formal and neural approaches: formal approaches
typically involve deductive techniques such as SAT solvers and types, while
neural approaches involve training on sample input-output pairs with their
corresponding programs, typically using sequence-based machine learning
techniques such as LSTMs [41]. As a result of this split, programming types had
yet to be used in neural program synthesis techniques.
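As a minimal illustration of the PBE setting (a hypothetical toy task, not one
taken from the thesis's dataset), the synthesizer receives input-output pairs
and must produce a program consistent with all of them:

```haskell
-- A toy PBE task: find f :: [Int] -> Int consistent with the examples.
examples :: [([Int], Int)]
examples = [([1, 2, 3], 6), ([4, 5], 9), ([], 0)]

-- One program satisfying every example:
f :: [Int] -> Int
f = sum

main :: IO ()
main = print (all (\(i, o) -> f i == o) examples)  -- True
```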
We propose a way to incorporate programming types into a neural program
synthesis approach for PBE. We introduce the Typed Neuro-Symbolic Program
Synthesis (TNSPS) method based on this idea, and test it in the functional
programming context to empirically verify that type information may help
improve generalization in neural synthesizers on limited-size datasets.
Our TNSPS model builds upon the existing Neuro-Symbolic Program Synthesis
(NSPS), a tree-based neural synthesizer that combines information from the
input-output examples and the current program, by further exposing information
on the types of those input-output examples, of the grammar production rules,
and of the hole that we wish to expand in the program.
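A toy sketch of the type-guided part of this idea (our own simplified
illustration, not the thesis's actual implementation): when a hole is annotated
with its expected type, production rules whose result type does not match can
be pruned before the neural model scores the remaining candidates.

```haskell
-- Toy types and grammar rules for a type-directed synthesizer sketch.
data Ty = TInt | TList Ty | TFun Ty Ty deriving (Eq, Show)

-- Grammar production rules, each carrying its type.
data Rule = Rule { ruleName :: String, ruleTy :: Ty } deriving Show

rules :: [Rule]
rules =
  [ Rule "sum"     (TFun (TList TInt) TInt)
  , Rule "length"  (TFun (TList TInt) TInt)
  , Rule "reverse" (TFun (TList TInt) (TList TInt))
  ]

-- Expanding a hole of a given type: keep only rules whose result type fits;
-- a neural model would then rank these type-compatible candidates.
candidates :: Ty -> [Rule]
candidates holeTy =
  [ r | r@(Rule _ (TFun _ result)) <- rules, result == holeTy ]

main :: IO ()
main = mapM_ print (candidates TInt)  -- keeps "sum" and "length" only
```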
We further explain how we generated a dataset within our domain, which uses a
limited subset of Haskell as the synthesis language. Finally, we discuss several
topics of interest that may help take these ideas further. For reproducibility,
we release our code publicly.
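As a hedged sketch of what such dataset generation could look like (the
function set below is assumed for illustration; the thesis's actual generator
samples programs from a typed Haskell grammar): pick candidate programs, run
them on sample inputs, and record the resulting input-output pairs alongside
each program.

```haskell
-- Hypothetical dataset generation: pair each candidate program with the
-- input-output examples obtained by running it on sample inputs.
programs :: [(String, [Int] -> Int)]
programs = [("sum", sum), ("length", length), ("product", product)]

sampleInputs :: [[Int]]
sampleInputs = [[1, 2, 3], [4, 5], [7]]

-- One training entry per program: (source name, its I/O examples).
dataset :: [(String, [([Int], Int)])]
dataset = [ (nm, [ (i, f i) | i <- sampleInputs ]) | (nm, f) <- programs ]

main :: IO ()
main = mapM_ print dataset
```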
Related papers
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z)
- Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages [97.58968222942173]
We take the first step to synthesize C programs from input-output examples.
In particular, we propose LaSynth, which learns the latent representation to approximate the execution of partially generated programs.
We show that training on these synthesized programs further improves the prediction performance for both Karel and C program synthesis.
arXiv Detail & Related papers (2021-06-29T02:21:32Z)
- Toward Neural-Network-Guided Program Synthesis and Verification [26.706421573322952]
We propose a novel framework of program and invariant synthesis called neural network-guided synthesis.
We first show that, by designing and training neural networks, we can extract logical formulas over integers from the weights and biases of the trained neural networks.
Based on the idea, we have implemented a tool to synthesize formulas from positive/negative examples and implication constraints.
arXiv Detail & Related papers (2021-03-17T03:09:05Z)
- Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z)
- Code Building Genetic Programming [0.0]
We introduce Code Building Genetic Programming (CBGP) as a framework within which this can be done.
CBGP produces a computational graph that can be executed or translated into source code of a host language.
arXiv Detail & Related papers (2020-08-09T04:33:04Z)
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis [81.54148730967394]
We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
arXiv Detail & Related papers (2020-07-16T04:15:47Z)
- PLANS: Robust Program Learning from Neurally Inferred Specifications [0.0]
Rule-based approaches offer correctness guarantees in an unsupervised way, while neural models are more realistically scalable to raw, high-dimensional input.
We introduce PLANS, a hybrid model for program synthesis from visual observations.
We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments.
arXiv Detail & Related papers (2020-06-05T08:51:34Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.