A Conversational Paradigm for Program Synthesis
- URL: http://arxiv.org/abs/2203.13474v2
- Date: Mon, 28 Mar 2022 17:10:30 GMT
- Title: A Conversational Paradigm for Program Synthesis
- Authors: Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo
Zhou, Silvio Savarese, Caiming Xiong
- Abstract summary: We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
- Score: 110.94409515865867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Program synthesis strives to generate a computer program as a solution to a
given problem specification. We propose a conversational program synthesis
approach via large language models, which addresses the challenges of searching
over a vast program space and user intent specification faced in prior
approaches. Our new approach casts the process of writing a specification and
program as a multi-turn conversation between a user and a system. It treats
program synthesis as a sequence prediction problem, in which the specification
is expressed in natural language and the desired program is conditionally
sampled. We train a family of large language models, called CodeGen, on natural
language and programming language data. With weak supervision in the data and
the scaling up of data size and model size, conversational capabilities emerge
from simple autoregressive language modeling. To study the model behavior
on conversational program synthesis, we develop a multi-turn programming
benchmark (MTPB), where solving each problem requires multi-step synthesis via
multi-turn conversation between the user and the model. Our findings show the
emergence of conversational capabilities and the effectiveness of the proposed
conversational program synthesis paradigm. In addition, our model CodeGen (with
up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the
HumanEval benchmark. We plan to make the training library JaxFormer, including
checkpoints, available as open source.
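
The abstract's framing of synthesis as conditional, multi-turn sequence prediction can be made concrete with a short sketch. The following assumes the released CodeGen checkpoints load through the Hugging Face transformers API; the checkpoint size, prompt format, and decoding settings are illustrative choices, not the paper's exact MTPB protocol.

```python
# Minimal sketch of multi-turn conversational synthesis as autoregressive
# sequence prediction. Checkpoint name, prompt format, and decoding settings
# are illustrative assumptions, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

turns = [
    "# Step 1: read a list of integers from stdin",
    "# Step 2: keep only the even numbers",
    "# Step 3: print their sum",
]

context = ""
for turn in turns:
    # Each user turn is appended to the running context; the model's
    # completion becomes part of the context for the next turn.
    context += turn + "\n"
    inputs = tokenizer(context, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.2,
        pad_token_id=tokenizer.eos_token_id,
    )
    context += tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ) + "\n"

print(context)
```

Folding each completion back into the running context is what lets a later turn refine or build on code produced in an earlier one, which is the behavior MTPB is designed to probe.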
Related papers
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z) - A Case Study in Engineering a Conversational Programming Assistant's
- A Case Study in Engineering a Conversational Programming Assistant's Persona [72.47187215119664]
Conversational capability was achieved by using an existing code-fluent Large Language Model.
A discussion of the evolution of the prompt provides a case study in how to coax an existing foundation model to behave in a desirable manner for a particular application.
arXiv Detail & Related papers (2023-01-13T14:48:47Z) - Programming by Example and Text-to-Code Translation for Conversational
Code Generation [1.8447697408534178]
We propose a method for integrating Programming by Example and text-to-code systems.
Our method, MPaTHS, offers an accessible natural language interface for synthesizing general programs.
We present a program representation that allows our method to be applied to the problem of task-oriented dialogue.
arXiv Detail & Related papers (2022-11-21T15:20:45Z) - PanGu-Coder: Program Synthesis with Function-Level Language Modeling [47.63943623661298]
- PanGu-Coder: Program Synthesis with Function-Level Language Modeling [47.63943623661298]
PanGu-Coder is a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation.
We train PanGu-Coder using a two-stage strategy: the first stage employs Causal Language Modelling to pre-train on raw programming language data.
The second stage uses a combination of Causal Language Modelling and Masked Language Modelling to train on loosely curated pairs of natural language program definitions and code functions.
arXiv Detail & Related papers (2022-07-22T18:08:16Z) - Multi-modal Program Inference: a Marriage of Pre-trainedLanguage Models
- Multi-modal Program Inference: a Marriage of Pre-trained Language Models and Component-based Synthesis [15.427687814482724]
Multi-modal program synthesis refers to the task of synthesizing programs (code) from their specification given in different forms.
Examples provide a precise but incomplete specification, and natural language provides an ambiguous but more "complete" task description.
We use our combination approach to instantiate multi-modal synthesis systems for two programming domains.
arXiv Detail & Related papers (2021-09-03T16:12:04Z) - Latent Execution for Neural Program Synthesis Beyond Domain-Specific
- Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages [97.58968222942173]
We take the first step to synthesize C programs from input-output examples.
In particular, we propose LaSynth, which learns a latent representation that approximates the execution of partially generated programs.
We show that training on these synthesized programs further improves the prediction performance for both Karel and C program synthesis.
arXiv Detail & Related papers (2021-06-29T02:21:32Z) - Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
- Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z) - Optimal Neural Program Synthesis from Multimodal Specifications [45.35689345004124]
- Optimal Neural Program Synthesis from Multimodal Specifications [45.35689345004124]
Multimodal program synthesis is an attractive way to scale program synthesis to challenging settings.
This paper proposes an optimal neural synthesis approach where the goal is to find a program that satisfies user-provided constraints while maximizing its score under a neural model.
arXiv Detail & Related papers (2020-10-04T20:51:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.