Relevance Transformer: Generating Concise Code Snippets with Relevance
Feedback
- URL: http://arxiv.org/abs/2007.02609v2
- Date: Tue, 8 Dec 2020 16:33:43 GMT
- Title: Relevance Transformer: Generating Concise Code Snippets with Relevance
Feedback
- Authors: Carlos Gemmell, Federico Rossetto, Jeffrey Dalton
- Abstract summary: We introduce and study modern Transformer architectures for explicit code generation.
We propose a new model called the Relevance Transformer that incorporates external knowledge using pseudo-relevance feedback.
The results show improvements over state-of-the-art methods based on BLEU evaluation.
- Score: 6.230751621285322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tools capable of automatic code generation have the potential to augment
a programmer's capabilities. While straightforward code retrieval is incorporated
into many IDEs, an emerging area is explicit code generation. Code generation
is currently approached as a Machine Translation task, with Recurrent Neural
Network (RNN) based encoder-decoder architectures trained on code-description
pairs. In this work we introduce and study modern Transformer architectures for
this task. We further propose a new model called the Relevance Transformer that
incorporates external knowledge using pseudo-relevance feedback. The Relevance
Transformer biases the decoding process to be similar to existing retrieved
code while enforcing diversity. We perform experiments on multiple standard
benchmark datasets for code generation including Django, Hearthstone, and
CoNaLa. The results show improvements over state-of-the-art methods based on
BLEU evaluation. The Relevance Transformer model shows the potential of
Transformer-based architectures for code generation and introduces a method of
incorporating pseudo-relevance feedback during inference.
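The abstract does not spell out the biasing mechanism, so the following is a minimal sketch of how pseudo-relevance feedback could steer token-level decoding: tokens that occur in retrieved code receive an additive bonus, and already-emitted tokens are penalized as a crude stand-in for the diversity constraint. All names, the log-count bonus, and the repetition penalty are illustrative assumptions, not the authors' implementation.

from collections import Counter
import numpy as np

def relevance_bias(logits, retrieved_token_ids, generated_ids, bonus=2.0, diversity_penalty=1.0):
    # Reward tokens that appear in retrieved code; discourage tokens already generated.
    biased = logits.copy()
    for tok, count in Counter(retrieved_token_ids).items():
        biased[tok] += bonus * np.log1p(count)
    for tok, count in Counter(generated_ids).items():
        biased[tok] -= diversity_penalty * count
    return biased

def greedy_decode(step_logits_fn, retrieved_token_ids, max_len=32, eos_id=2):
    # Greedy decoding loop that applies the relevance bias at every step.
    generated = []
    for _ in range(max_len):
        logits = relevance_bias(step_logits_fn(generated), retrieved_token_ids, generated)
        next_id = int(np.argmax(logits))
        generated.append(next_id)
        if next_id == eos_id:
            break
    return generated

# Toy usage with a fake "model" that returns random logits over a 10-token vocabulary.
rng = np.random.default_rng(0)
print(greedy_decode(lambda prefix: rng.normal(size=10), retrieved_token_ids=[5, 5, 7]))

In the paper, outputs on Django, Hearthstone, and CoNaLa are scored with BLEU against reference code; any bias term of this kind trades overlap with retrieved snippets against fidelity to the natural-language query.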
Related papers
- CodeRAG-Bench: Can Retrieval Augment Code Generation? [78.37076502395699]
We conduct a systematic, large-scale analysis of code generation using retrieval-augmented generation.
We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks.
We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources.
arXiv Detail & Related papers (2024-06-20T16:59:52Z)
- kTrans: Knowledge-Aware Transformer for Binary Code Embedding [15.361622199889263]
We propose a novel Transformer-based approach, namely kTrans, to generate knowledge-aware binary code embedding.
We inspect the generated embeddings with outlier detection and visualization, and also apply kTrans to 3 downstream tasks: Binary Code Similarity Detection (BCSD), Function Type Recovery (FTR), and Indirect Call Recognition (ICR).
Evaluation results show that kTrans can generate high-quality binary code embeddings, and outperforms state-of-the-art (SOTA) approaches on downstream tasks by 5.2%, 6.8%, and 12.6% respectively.
arXiv Detail & Related papers (2023-08-24T09:07:11Z)
- Planning with Large Language Models for Code Generation [100.07232672883897]
Planning-Guided Transformer Decoding (PG-TD) uses a planning algorithm to do lookahead search and guide the Transformer to generate better programs.
We empirically evaluate our framework with several large language models as backbones on public coding challenge benchmarks.
arXiv Detail & Related papers (2023-03-09T18:59:47Z)
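As a rough illustration of planning-guided decoding, the sketch below expands the top-k next tokens at each step, rolls each candidate out greedily, and commits to the token whose rollout scores best (for example, by the fraction of public tests passed). The function names, the greedy rollout, and the scoring hook are assumptions for illustration, not PG-TD's actual tree-search procedure.

import numpy as np

def greedy_rollout(step_logits_fn, prefix, max_len, eos_id):
    # Complete a candidate prefix greedily until EOS or the length cap.
    out = list(prefix)
    while len(out) < max_len and out[-1] != eos_id:
        out.append(int(np.argmax(step_logits_fn(out))))
    return out

def planned_decode(step_logits_fn, rollout_score_fn, k=3, max_len=32, eos_id=2):
    prefix = []
    for _ in range(max_len):
        logits = step_logits_fn(prefix)              # model call for the current prefix
        candidates = np.argsort(logits)[-k:]         # top-k next-token candidates
        best_tok, best_score = None, float("-inf")
        for tok in candidates:
            rollout = greedy_rollout(step_logits_fn, prefix + [int(tok)], max_len, eos_id)
            score = rollout_score_fn(rollout)        # e.g. fraction of public tests passed
            if score > best_score:
                best_tok, best_score = int(tok), score
        prefix.append(best_tok)
        if best_tok == eos_id:
            break
    return prefix

A real implementation would cache model states across rollouts; this sketch recomputes them for clarity.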
- Transformer with Tree-order Encoding for Neural Program Generation [8.173517923612426]
We introduce a tree-based positional encoding and a shared natural-language subword vocabulary for Transformers.
Our findings suggest that employing a tree-based positional encoding in combination with a shared natural-language subword vocabulary improves generation performance over sequential positional encodings.
arXiv Detail & Related papers (2022-05-30T12:27:48Z)
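One simple way to picture a tree-based positional encoding is to position each AST node by its path of child indices from the root rather than by its offset in the flattened token sequence; the sketch below does exactly that for Python ASTs. The traversal and the fixed-depth padding are illustrative assumptions, not the paper's exact encoding.

import ast

def tree_positions(source, max_depth=8):
    # Map each AST node to a fixed-length tuple of child indices from the root.
    root = ast.parse(source)
    positions = {}
    def walk(node, path):
        positions[node] = tuple(path[:max_depth]) + (0,) * max(0, max_depth - len(path))
        for i, child in enumerate(ast.iter_child_nodes(node)):
            walk(child, path + [i + 1])   # 1-based indices so the 0 padding stays unambiguous
    walk(root, [])
    return positions

for node, pos in tree_positions("def add(a, b):\n    return a + b").items():
    print(type(node).__name__, pos)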
- GypSum: Learning Hybrid Representations for Code Summarization [21.701127410434914]
GypSum is a new deep learning model that learns hybrid representations using graph attention neural networks and a pre-trained programming and natural language model.
We modify the encoder-decoder sublayer in the Transformer's decoder to fuse the representations and propose a dual-copy mechanism to facilitate summary generation.
arXiv Detail & Related papers (2022-04-26T07:44:49Z)
- Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each channel's output dimension to high dimension for better representation of the bits information to be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
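The idea of combining lexical and semantic retrieval before completion can be sketched in plain Python: score candidate snippets by token overlap plus a cosine similarity from an embedding, then prepend the best hit to the unfinished code as extra context. The Jaccard scorer, the bag-of-tokens stand-in for a real code encoder, and the mixing weight alpha are assumptions for illustration, not ReACC's components.

import math
import re
from collections import Counter

def code_tokens(code):
    return re.findall(r"[A-Za-z_]\w*", code.lower())

def lexical_score(query, candidate):
    q, c = set(code_tokens(query)), set(code_tokens(candidate))
    return len(q & c) / len(q | c) if q | c else 0.0        # Jaccard overlap

def embed(code):
    # Stand-in for a real code encoder: a normalized bag-of-tokens vector.
    counts = Counter(code_tokens(code))
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {t: v / norm for t, v in counts.items()}

def semantic_score(query, candidate):
    qv, cv = embed(query), embed(candidate)
    return sum(qv[t] * cv.get(t, 0.0) for t in qv)           # cosine similarity

def retrieve_context(unfinished_code, corpus, alpha=0.5):
    scored = [(alpha * lexical_score(unfinished_code, c)
               + (1 - alpha) * semantic_score(unfinished_code, c), c) for c in corpus]
    best = max(scored)[1]
    return best + "\n" + unfinished_code                      # retrieved snippet as extra context

corpus = ["def read_json(path):\n    import json\n    return json.load(open(path))",
          "def mean(xs):\n    return sum(xs) / len(xs)"]
print(retrieve_context("def load_config(path):\n    import json", corpus))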
- Thinking Like Transformers [64.96770952820691]
We propose a computational model for the transformer-encoder in the form of a programming language.
We show how RASP can be used to program solutions to tasks that could conceivably be learned by a Transformer.
We provide RASP programs for histograms, sorting, and Dyck-languages.
arXiv Detail & Related papers (2021-06-13T13:04:46Z)
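RASP's select/aggregate primitives can be mimicked directly in Python to see why a histogram is expressible: a selector marks which positions attend to which (here, positions holding the same token), and an aggregation counts over the selected positions. This is a plain-Python rendering of the idea, not the RASP language itself.

import numpy as np

def select(seq, predicate):
    # Boolean attention pattern: entry [i][j] is True when predicate(seq[i], seq[j]) holds.
    n = len(seq)
    return np.array([[predicate(seq[i], seq[j]) for j in range(n)] for i in range(n)])

def aggregate_count(selector):
    # For each query position, count how many key positions were selected.
    return selector.sum(axis=1)

seq = list("hello")
same_token = select(seq, lambda a, b: a == b)
print(aggregate_count(same_token))   # [1 1 2 2 1]: per-position frequency of that position's token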
- Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
The tokenized image patches are fed into a Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z)
- Code Structure Guided Transformer for Source Code Summarization [17.512699897227055]
Transformer-based approaches do not explicitly incorporate code structure information, which is important for capturing code semantics.
We propose a novel approach named SG-Trans to incorporate code structural properties into Transformer.
arXiv Detail & Related papers (2021-04-19T14:26:56Z)
- Retrieve and Refine: Exemplar-based Neural Comment Generation [27.90756259321855]
Comments of similar code snippets are helpful for comment generation.
We design a novel seq2seq neural network that takes the given code, its AST, its similar code, and its exemplar as input.
We evaluate our approach on a large-scale Java corpus, which contains about 2M samples.
arXiv Detail & Related papers (2020-10-09T09:33:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.