Grammar Filtering For Syntax-Guided Synthesis
- URL: http://arxiv.org/abs/2002.02884v1
- Date: Fri, 7 Feb 2020 16:35:50 GMT
- Title: Grammar Filtering For Syntax-Guided Synthesis
- Authors: Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito
- Abstract summary: We propose a system for using machine learning in tandem with automated reasoning techniques to solve PBE problems.
By preprocessing SyGuS PBE problems with a neural network, we can use a data-driven approach to reduce the size of the search space.
Our system runs atop existing SyGuS PBE synthesis tools, decreasing the runtime of the winner of the 2019 SyGuS Competition's PBE Strings track by 47.65%.
- Score: 6.298766745228328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Programming-by-example (PBE) is a synthesis paradigm that allows users to
generate functions by simply providing input-output examples. While a promising
interaction paradigm, synthesis is still too slow for real-time interaction and
more widespread adoption. Existing approaches to PBE synthesis have used
automated reasoning tools, such as SMT solvers, as well as machine learning
techniques. At its core, the automated reasoning approach relies on highly
domain-specific knowledge of programming languages. The machine learning
approaches, on the other hand, exploit the fact that when working with program
code, it is possible to generate arbitrarily large training datasets. In this
work, we propose a system for using machine learning in tandem with automated
reasoning techniques to solve Syntax-Guided Synthesis (SyGuS) style PBE
problems. By preprocessing SyGuS PBE problems with a neural network, we can use
a data-driven approach to reduce the size of the search space, then allow
automated reasoning-based solvers to more quickly find a solution analytically.
Our system runs atop existing SyGuS PBE synthesis tools, decreasing the runtime
of the winner of the 2019 SyGuS Competition's PBE Strings track by 47.65% and
outperforming all of the competing tools.
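One way to operationalize the approach above is a small control loop: score each grammar production against the examples, drop the low-scoring ones, hand the pruned problem to an existing solver, and, in case pruning removed something essential, fall back to the full grammar. The Python sketch below illustrates that loop under assumed interfaces; the scorer and solver callables are placeholders, not the paper's actual model or tooling.

```python
# Minimal sketch of neural grammar filtering for SyGuS PBE (illustrative
# only; the scorer stands in for the paper's neural network and the solver
# for an off-the-shelf SyGuS backend).
from typing import Callable, List, Optional, Tuple

Example = Tuple[str, str]   # one (input, output) pair
Production = str            # e.g. "(str.++ Start Start)"

def filter_grammar(
    productions: List[Production],
    examples: List[Example],
    score: Callable[[Production, List[Example]], float],
    threshold: float = 0.5,
) -> List[Production]:
    """Keep only the productions the model predicts are needed."""
    kept = [p for p in productions if score(p, examples) >= threshold]
    return kept or list(productions)  # never hand the solver an empty grammar

def synthesize_with_filtering(
    productions: List[Production],
    examples: List[Example],
    score: Callable[[Production, List[Example]], float],
    run_solver: Callable[[List[Production], List[Example]], Optional[str]],
) -> Optional[str]:
    """Try the pruned grammar first; fall back to the full grammar on failure."""
    reduced = filter_grammar(productions, examples, score)
    solution = run_solver(reduced, examples)
    if solution is None and len(reduced) < len(productions):
        # Over-pruning can remove a production every correct program needs,
        # so retry on the original grammar to preserve completeness.
        solution = run_solver(productions, examples)
    return solution
```

Because the fallback reruns the unpruned problem, over-aggressive filtering costs extra time but never loses solutions; the threshold trades average speedup against how often the fallback fires.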
Related papers
- Opening the AI black box: program synthesis via mechanistic interpretability [12.849101734204456]
We present a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code.
We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4.
As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub.
arXiv Detail & Related papers (2024-02-07T18:59:12Z)
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code [71.55208585024198]
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that auto-documentation (AutoDoc) boosts performance by helping LILO's synthesizer interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- Guess & Sketch: Language Model Guided Transpilation [59.02147255276078]
Learned transpilation offers an alternative to manual re-writing and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM, then passes it to a symbolic solver to resolve semantic equivalence.
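As a rough illustration of that guess-then-sketch division of labor, the hypothetical snippet below keeps high-confidence LM tokens verbatim and turns low-confidence spans into holes for a symbolic solver to fill; the token/confidence interface is an assumption, not the authors' actual pipeline.

```python
# Illustrative guess-then-sketch skeleton (not the paper's code): the LM
# "guesses" a transpilation with per-token confidences, uncertain tokens
# become holes, and a symbolic solver resolves the holes.
from typing import Callable, List, Optional, Tuple

def make_sketch(tokens: List[str], confs: List[float], tau: float = 0.9) -> List[str]:
    """Replace tokens the LM is unsure about with holes for the solver."""
    return [tok if c >= tau else "??" for tok, c in zip(tokens, confs)]

def guess_and_sketch(
    lm_guess: Callable[[str], Tuple[List[str], List[float]]],
    solve_sketch: Callable[[List[str], str], Optional[str]],
    source: str,
) -> Optional[str]:
    tokens, confs = lm_guess(source)      # "guess": LM output plus confidences
    sketch = make_sketch(tokens, confs)   # mask the uncertain tokens as holes
    return solve_sketch(sketch, source)   # "sketch": solver fills the holes
```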
arXiv Detail & Related papers (2023-09-25T15:42:18Z)
- Reinforcement Learning and Data-Generation for Syntax-Guided Synthesis [0.0]
We present a reinforcement-learning algorithm for SyGuS which uses Monte-Carlo Tree Search (MCTS) to search the space of candidate solutions.
Our algorithm learns policy and value functions which, combined with the upper confidence bound for trees, allow it to balance exploration and exploitation.
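A standard way to combine a learned policy prior with the upper confidence bound for trees is a PUCT-style selection rule; the snippet below is a generic illustration of that exploration-exploitation balance, not the paper's implementation.

```python
# Generic PUCT-style node selection for MCTS with learned policy/value
# functions (illustrative sketch, not the paper's code).
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Child:
    n: int        # visit count
    w: float      # accumulated value estimates
    prior: float  # policy-network probability of this action

def uct_select(children: List[Child], c_puct: float = 1.0) -> Child:
    """Pick the child maximizing value plus an exploration bonus."""
    total = sum(ch.n for ch in children)
    def uct(ch: Child) -> float:
        q = ch.w / ch.n if ch.n else 0.0                       # exploitation
        u = c_puct * ch.prior * math.sqrt(total) / (1 + ch.n)  # exploration
        return q + u
    return max(children, key=uct)
```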
arXiv Detail & Related papers (2023-07-13T11:30:50Z)
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z)
- OMLT: Optimization & Machine Learning Toolkit [54.58348769621782]
The optimization and machine learning toolkit (OMLT) is an open-source software package incorporating neural network and gradient-boosted tree surrogate models.
We discuss the advances in optimization technology that made OMLT possible and show how OMLT seamlessly integrates with the algebraic modeling language Pyomo.
arXiv Detail & Related papers (2022-02-04T22:23:45Z)
- Type-driven Neural Programming by Example [0.0]
We look into programming by example (PBE): the task of finding a program that maps given inputs to given outputs.
We propose a way to incorporate programming types into a neural program synthesis approach for PBE.
arXiv Detail & Related papers (2020-08-28T12:30:05Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
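To make the idea concrete, the toy sketch below enumerates expressions bottom-up over a pool of value vectors and lets a learned scorer choose which compositions survive each round; the integer DSL and scoring interface are illustrative assumptions, not the authors' code.

```python
# Toy learning-guided bottom-up search (illustrative only): compose
# previously built expressions, rank candidate compositions with a learned
# scorer, and keep a beam of the best per round.
from typing import Callable, Dict, List, Optional, Tuple

def bottom_up_search(
    inputs: List[int],
    outputs: List[int],
    ops: List[Tuple[str, Callable[[int, int], int]]],
    score: Callable[[str, Tuple[int, ...]], float],
    rounds: int = 3,
    beam: int = 100,
) -> Optional[str]:
    """Enumerate expressions bottom-up, keeping the model's top candidates."""
    # Map each expression's value vector (its results on all example inputs)
    # to the expression that produced it; equal vectors are deduplicated.
    pool: Dict[Tuple[int, ...], str] = {tuple(inputs): "x"}
    for _ in range(rounds):
        candidates = []
        for va, ea in list(pool.items()):
            for vb, eb in list(pool.items()):
                for name, fn in ops:
                    expr = f"({name} {ea} {eb})"
                    vals = tuple(fn(a, b) for a, b in zip(va, vb))
                    candidates.append((score(expr, vals), vals, expr))
        candidates.sort(key=lambda t: -t[0])   # model-prioritized ordering
        for _, vals, expr in candidates[:beam]:
            pool.setdefault(vals, expr)
            if list(vals) == outputs:
                return expr                    # matches all examples
    return None
```

With a constant scorer this degenerates to plain breadth-first enumeration; the learned prioritization is what keeps the beam focused on compositions consistent with the examples.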
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis [81.54148730967394]
We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
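The synthesize-execute-debug loop can be stated compactly; in the schematic below the `synthesize`, `execute`, and `repair` callables stand in for SED's neural components and are assumptions for illustration.

```python
# Schematic of a synthesize-execute-debug loop (illustrative only).
from typing import Callable, List, Optional, Tuple

Example = Tuple[str, str]  # (input, expected output)

def sed_loop(
    synthesize: Callable[[List[Example]], str],
    execute: Callable[[str, str], str],
    repair: Callable[[str, List[Tuple[Example, str]]], str],
    examples: List[Example],
    max_repairs: int = 5,
) -> Optional[str]:
    """Draft a program, run it on the examples, and repair the failures."""
    program = synthesize(examples)           # synthesizer's first guess
    for _ in range(max_repairs):
        failures = []
        for inp, expected in examples:
            actual = execute(program, inp)
            if actual != expected:
                failures.append(((inp, expected), actual))
        if not failures:
            return program                   # every example passes
        program = repair(program, failures)  # debugger proposes a fix
    return None
```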
arXiv Detail & Related papers (2020-07-16T04:15:47Z)
- PHOTONAI -- A Python API for Rapid Machine Learning Model Development [2.414341608751139]
PHOTONAI is a high-level Python API designed to simplify and accelerate machine learning model development.
It functions as a unifying framework allowing the user to easily access and combine algorithms from different toolboxes into custom algorithm sequences.
arXiv Detail & Related papers (2020-02-13T10:33:05Z)
- Towards Neural-Guided Program Synthesis for Linear Temporal Logic Specifications [26.547133495699093]
We use a neural network to learn a Q-function that is then used to guide search, and to construct programs that are subsequently verified for correctness.
Our method is unique in combining search with deep learning to realize synthesis.
arXiv Detail & Related papers (2019-12-31T17:09:49Z)