Neural Program Synthesis with Query
- URL: http://arxiv.org/abs/2205.07857v1
- Date: Sun, 8 May 2022 13:53:18 GMT
- Title: Neural Program Synthesis with Query
- Authors: Di Huang, Rui Zhang, Xing Hu, Xishan Zhang, Pengwei Jin, Nan Li,
Zidong Du, Qi Guo, Yunji Chen
- Abstract summary: We propose a query-based framework that trains a neural network to generate informative input-output examples automatically.
We evaluate the effectiveness and generalization of the proposed query-based framework on the Karel task and the list processing task.
Experimental results show that the query-based framework can generate informative input-output examples that match or even outperform well-designed input-output examples.
- Score: 27.212984312375166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to find a program satisfying the user intent given input-output
examples, program synthesis has attracted increasing interest in the area of
machine learning. Despite the promising performance of existing methods, most
of their success comes from the privileged information of well-designed
input-output examples. However, providing such examples is often unrealistic,
since it requires users to be able to characterize the underlying program with
a few input-output examples drawn from the training distribution. In this
work, we propose a query-based framework that trains a
query neural network to generate informative input-output examples
automatically and interactively from a large query space. The quality of the
query depends on the amount of mutual information between the query and the
corresponding program, which can guide the optimization of the query framework.
To estimate the mutual information more accurately, we introduce the functional
space (F-space), which models the relevance between the input-output examples
and the programs in a differentiable way. We evaluate the effectiveness and
generalization of the proposed query-based framework on the Karel task and the
list processing task. Experimental results show that the query-based framework
can generate informative input-output examples that match or even outperform
well-designed input-output examples.
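As a rough illustration of how such a mutual-information objective over queries could be set up (the network shapes, the InfoNCE-style lower bound, and all names below are illustrative assumptions, not the paper's F-space construction):

```python
# Minimal sketch (PyTorch): train a query network so that an input-output
# example scores high against its own program and low against others.
# Maximizing this InfoNCE-style bound encourages informative queries.
import torch
import torch.nn as nn

class QueryNet(nn.Module):
    """Embeds I/O examples and programs into a shared space."""
    def __init__(self, io_dim: int, prog_dim: int, hidden: int = 64):
        super().__init__()
        self.io_enc = nn.Sequential(nn.Linear(io_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden))
        self.prog_enc = nn.Sequential(nn.Linear(prog_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))

    def forward(self, io_batch, prog_batch):
        # Pairwise compatibility scores between I/O examples and programs.
        return self.io_enc(io_batch) @ self.prog_enc(prog_batch).T

def mi_lower_bound(scores: torch.Tensor) -> torch.Tensor:
    # InfoNCE: diagonal entries are the matching (example, program) pairs,
    # off-diagonal entries serve as negatives.
    labels = torch.arange(scores.size(0))
    return -nn.functional.cross_entropy(scores, labels)

net = QueryNet(io_dim=8, prog_dim=16)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
io_batch = torch.randn(32, 8)      # stand-in for encoded I/O examples
prog_batch = torch.randn(32, 16)   # stand-in for encoded programs
loss = -mi_lower_bound(net(io_batch, prog_batch))
opt.zero_grad(); loss.backward(); opt.step()
```

Under this kind of objective, a query is informative exactly when its answer discriminates the target program from the rest of the hypothesis space, which is the intuition the abstract appeals to.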
Related papers
XPrompt: Explaining Large Language Model's Generation via Joint Prompt Attribution [26.639271355209104]
Large Language Models (LLMs) have demonstrated impressive performance in complex text generation tasks.
The contribution of the input prompt to the generated content remains obscure to humans.
We introduce XPrompt, a counterfactual explanation framework based on joint prompt attribution.
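A simplified counterfactual-attribution sketch in this spirit (leave-one-token-out scoring; the paper's joint attribution objective is more involved, and the scorer below is a toy stand-in for an LLM likelihood):

```python
# Ablate each prompt token and record how much the model's score for the
# original generation drops; larger drops mean larger contributions.
def attribute(prompt_tokens, score_fn):
    base = score_fn(prompt_tokens)
    return {tok: base - score_fn([u for j, u in enumerate(prompt_tokens) if j != i])
            for i, tok in enumerate(prompt_tokens)}  # assumes unique tokens

# Toy scorer: counts task-relevant tokens, standing in for the LLM's
# log-likelihood of the generated text given the prompt.
relevant = {"translate", "french"}
score = lambda toks: sum(t in relevant for t in toks)
print(attribute(["please", "translate", "to", "french"], score))
# -> {'please': 0, 'translate': 1, 'to': 0, 'french': 1}
```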
arXiv Detail & Related papers (2024-05-30T18:16:41Z) - Localized RETE for Incremental Graph Queries [1.3858051019755282]
We propose an extended semantics that enables local yet fully incremental execution of graph queries.
The proposed technique can significantly improve performance regarding memory consumption and execution time in favorable cases, but may incur a noticeable linear overhead in unfavorable cases.
arXiv Detail & Related papers (2024-05-02T10:00:37Z) - Generating Pragmatic Examples to Train Neural Program Synthesizers [20.819451354452085]
A good synthesizer must choose the intended program from the many that are consistent with the given set of examples.
We propose a novel way to amortize this search with neural networks.
arXiv Detail & Related papers (2023-11-09T20:53:00Z) - From Probabilistic Programming to Complexity-based Programming [0.5874142059884521]
The paper presents the main characteristics and a preliminary implementation of a novel computational framework named CompLog.
Inspired by probabilistic programming systems like ProbLog, CompLog builds upon the inferential mechanisms proposed by Simplicity Theory.
The proposed system enables users to compute ex-post and ex-ante measures of unexpectedness of a certain situation.
arXiv Detail & Related papers (2023-07-28T10:11:01Z) - Efficient Prompting via Dynamic In-Context Learning [76.83516913735072]
We propose DynaICL, a recipe for efficient prompting with black-box generalist models.
DynaICL dynamically allocates in-context examples according to the input complexity and the computational budget.
We find that DynaICL saves up to 46% token budget compared to the common practice that allocates the same number of in-context examples to each input.
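A back-of-the-envelope sketch of budget-aware allocation (the proportional rule and the clamping bounds below are assumptions; the paper trains a meta-controller to predict the allocation):

```python
# Give harder inputs more in-context examples under a fixed total budget.
def allocate_examples(complexities, total_examples, k_min=1, k_max=8):
    total_c = sum(complexities)
    shares = [round(c / total_c * total_examples) for c in complexities]
    return [max(k_min, min(k_max, k)) for k in shares]

# Three inputs of increasing difficulty sharing a budget of 12 examples.
print(allocate_examples([0.1, 0.5, 0.9], total_examples=12))  # -> [1, 4, 7]
```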
arXiv Detail & Related papers (2023-05-18T17:58:31Z) - How to Design Sample and Computationally Efficient VQA Models [53.65668097847456]
We find that representing the text as probabilistic programs and images as object-level scene graphs best satisfies the desiderata of sample and computational efficiency.
We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner.
arXiv Detail & Related papers (2021-03-22T01:48:16Z) - Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
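A toy REINFORCE-flavored sketch of such a controller loop (the reward model, baseline, and update rule are illustrative; the paper trains a real task model to produce the reward):

```python
import random

# The controller keeps one inclusion probability per candidate embedding.
embeddings = ["word", "char", "bert", "elmo"]
probs = {e: 0.5 for e in embeddings}

def sample_mask():
    return {e: random.random() < probs[e] for e in embeddings}

def reward(mask):
    # Stand-in for training a task model on the chosen concatenation and
    # measuring its accuracy on the structured prediction task.
    return 0.7 + 0.2 * mask["bert"] - 0.1 * mask["char"]

baseline = 0.75
for _ in range(200):
    mask = sample_mask()
    advantage = reward(mask) - baseline
    for e in embeddings:
        # Raise p(e) when including e coincided with above-baseline reward.
        step = 0.1 * advantage * (1 if mask[e] else -1)
        probs[e] = min(0.95, max(0.05, probs[e] + step))

print({e: round(p, 2) for e, p in probs.items()})  # "bert" drifts upward
```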
arXiv Detail & Related papers (2020-10-10T14:03:20Z) - BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
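A minimal bottom-up search sketch in this spirit (the distance-to-target heuristic stands in for the learned model, and the tiny DSL is hypothetical):

```python
import itertools

def bottom_up_search(inputs, target, ops, max_rounds=3, beam=50):
    # pool holds (expression, value-on-the-example) pairs.
    pool = list(inputs.items())
    for _ in range(max_rounds):
        candidates = []
        for name, op in ops.items():
            for (ea, va), (eb, vb) in itertools.product(pool, repeat=2):
                candidates.append((f"{name}({ea}, {eb})", op(va, vb)))
        # A trained model would rank compositions here; we rank by how
        # close the intermediate value is to the target output.
        candidates.sort(key=lambda c: abs(c[1] - target))
        pool.extend(candidates[:beam])
        for expr, val in pool:
            if val == target:
                return expr
    return None

ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
print(bottom_up_search({"x": 3, "y": 4}, target=19, ops=ops))
```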
arXiv Detail & Related papers (2020-07-28T17:46:18Z) - Information-theoretic User Interaction: Significant Inputs for Program
Synthesis [11.473616777800318]
We introduce the significant questions problem and show that it is hard in general.
We develop an information-theoretic greedy approach for solving the problem.
In the context of interactive program synthesis, we use the above result to develop an active program learner.
Our active learner is able to tradeoff false negatives for false positives and converge in a small number of iterations on a real-world dataset.
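A greedy entropy sketch of the idea (the hypothesis space and scoring below are toy assumptions): ask about the input whose answer best partitions the surviving candidate programs.

```python
import math
from collections import Counter

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts)

def most_significant_input(candidates, inputs):
    # Higher output entropy means the user's answer is expected to rule
    # out more of the remaining candidate programs.
    def score(x):
        return entropy(Counter(p(x) for p in candidates).values())
    return max(inputs, key=score)

# Toy hypothesis space: affine programs over the integers.
candidates = [lambda x, a=a, b=b: a * x + b for a in (1, 2) for b in (0, 1)]
print(most_significant_input(candidates, inputs=range(-3, 4)))
```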
arXiv Detail & Related papers (2020-06-22T21:46:40Z) - IReEn: Reverse-Engineering of Black-Box Functions via Iterative Neural
Program Synthesis [70.61283188380689]
We investigate the problem of revealing the functionality of a black-box agent.
We do not rely on privileged information on the black box, but rather investigate the problem under a weaker assumption of having only access to inputs and outputs of the program.
Our results show that the proposed approach outperforms the state-of-the-art on this challenge by finding an approximately functional equivalent program in 78% of cases.
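A stripped-down sketch of the iterative loop (the candidate enumeration and probe inputs are toy assumptions; the paper uses iterative neural program synthesis rather than a fixed candidate list):

```python
def reverse_engineer(black_box, candidates, probe_inputs):
    observations = []
    for x in probe_inputs:
        observations.append((x, black_box(x)))  # only I/O access is assumed
        candidates = [p for p in candidates
                      if all(p(i) == o for i, o in observations)]
        if len(candidates) == 1:
            break
    return candidates

black_box = lambda x: 2 * x + 1            # opaque to the synthesizer
candidates = [lambda x, a=a, b=b: a * x + b for a in (1, 2, 3) for b in (0, 1)]
print(len(reverse_engineer(black_box, candidates, probe_inputs=[0, 1])))  # 1
```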
arXiv Detail & Related papers (2020-06-18T17:50:48Z) - Creating Synthetic Datasets via Evolution for Neural Program Synthesis [77.34726150561087]
We show that some program synthesis approaches generalize poorly to data distributions different from that of the randomly generated examples.
We propose a new, adversarial approach to control the bias of synthetic data distributions and show that it outperforms current approaches.
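One way to picture the adversarial control of the data distribution (the fitness function, mutation operator, and accuracy model are all hypothetical stand-ins):

```python
import random

def mutate(example):
    return [x + random.choice((-1, 0, 1)) for x in example]

def evolve_hard_examples(population, synth_accuracy, generations=20):
    for _ in range(generations):
        # Lower synthesizer accuracy = harder example = higher fitness.
        population.sort(key=synth_accuracy)
        survivors = population[: len(population) // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return population

# Toy accuracy model: the synthesizer struggles on large-magnitude inputs.
acc = lambda ex: 1.0 / (1.0 + sum(abs(x) for x in ex))
pop = [[random.randint(-2, 2) for _ in range(3)] for _ in range(10)]
print(evolve_hard_examples(pop, acc)[0])  # hardest surviving example
```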
arXiv Detail & Related papers (2020-03-23T18:34:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.