Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations
- URL: http://arxiv.org/abs/2106.06110v1
- Date: Fri, 11 Jun 2021 01:23:07 GMT
- Title: Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations
- Authors: Syed Arbaaz Qureshi, Sonu Mehta, Ranjita Bhagwan, Rahul Kumar
- Abstract summary: We use structural information from the Abstract Syntax Tree (AST) to represent source code edits.
Inspired by the code2seq approach, we evaluate how using structural information from AST can help with the task of code edit classification.
- Score: 2.1793134762413433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent times, it has been shown that one can use code as data to aid
various applications such as automatic commit message generation, automatic
generation of pull request descriptions, and automatic program repair. Take, for
instance, the problem of commit message generation. Treating source code as a
sequence of tokens, state-of-the-art techniques generate commit messages using
neural machine translation models. However, they tend to ignore the syntactic
structure of programming languages.
Previous work, i.e., code2seq, has used structural information from the Abstract
Syntax Tree (AST) to represent source code, which it uses to automatically
generate method names. In this paper, we build upon this state-of-the-art
approach and modify it to represent source code edits. We determine the effect
of using such syntactic structure on the problem of classifying code edits.
Inspired by the code2seq approach, we evaluate how using structural information
from the AST, i.e., paths between AST leaf nodes, can help with the task of code
edit classification on two datasets of fine-grained syntactic edits.
Our experiments show that adding syntactic structure does not yield any
improvement over less sophisticated methods. The results suggest that
techniques such as code2seq, while promising, have a long way to go before
they can be generically applied to learning code edit representations. We hope
that these results will benefit other researchers and inspire them to work
further on this problem.
Related papers
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code [71.55208585024198]
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that AutoDoc boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- Soft-Labeled Contrastive Pre-training for Function-level Code Representation [127.71430696347174]
We present SCodeR, a soft-labeled contrastive pre-training framework with two positive sample construction methods.
By considering the relevance between code fragments in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
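As a rough sketch of the general idea (not SCodeR's actual objective): soft-labeled contrastive learning replaces the one-hot InfoNCE target with a soft distribution over candidates. The function name, tensor shapes, and temperature below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_labeled_contrastive_loss(anchor, candidates, soft_labels, tau=0.07):
    """Cross-entropy between the anchor-to-candidate similarity distribution
    and soft labels, e.g. fine-grained code relevance scores.

    anchor:      (batch, dim) embeddings
    candidates:  (batch, n, dim) candidate embeddings per anchor
    soft_labels: (batch, n), rows sum to 1; a one-hot row recovers InfoNCE
    """
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    logits = torch.einsum("bd,bnd->bn", anchor, candidates) / tau
    return -(soft_labels * F.log_softmax(logits, dim=-1)).sum(-1).mean()

# Toy usage: 4 anchors, 8 candidates each, 128-dim embeddings.
a, c = torch.randn(4, 128), torch.randn(4, 8, 128)
y = F.softmax(torch.randn(4, 8), dim=-1)  # stand-in soft relevance labels
print(soft_labeled_contrastive_loss(a, c, y))
```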
arXiv Detail & Related papers (2022-10-18T05:17:37Z)
- UniXcoder: Unified Cross-Modal Pre-training for Code Representation [65.6846553962117]
We present UniXcoder, a unified cross-modal pre-trained model for programming language.
We propose a one-to-one mapping method to transform the AST into a sequence structure that retains all structural information from the tree.
We evaluate UniXcoder on five code-related tasks over nine datasets.
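The flavor of such a mapping can be shown with a small Python sketch: a pre-order serialization with paired opening and closing markers is invertible, so no structural information is lost. The marker format is an assumption for illustration; UniXcoder defines its own encoding.

```python
import ast

def ast_to_sequence(node):
    """Serialize an AST into a flat token sequence with paired markers so
    the original tree can be reconstructed from the sequence alone."""
    name = type(node).__name__
    tokens = [f"<{name}>"]
    if isinstance(node, ast.Name):
        tokens.append(node.id)
    elif isinstance(node, ast.Constant):
        tokens.append(repr(node.value))
    for child in ast.iter_child_nodes(node):
        tokens.extend(ast_to_sequence(child))
    return tokens + [f"</{name}>"]

print(" ".join(ast_to_sequence(ast.parse("x = y + 1"))))
# <Module> <Assign> <Name> x <Store> </Store> </Name> <BinOp> ... </Module>
```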
arXiv Detail & Related papers (2022-03-08T04:48:07Z)
- ECMG: Exemplar-based Commit Message Generation [45.54414179533286]
Commit messages concisely describe the content of code diffs (i.e., code changes) and the intent behind them.
Information retrieval-based methods reuse the commit messages of similar code diffs, while neural-based methods learn the semantic connection between code diffs and commit messages.
We propose a novel exemplar-based neural commit message generation model, which treats a similar commit message as an exemplar and leverages it to guide the neural network model to generate an accurate commit message.
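A minimal sketch of the retrieval half, assuming a toy Jaccard similarity over whitespace-tokenized diffs; ECMG itself uses a proper retrieval backend and then conditions a neural decoder on the exemplar.

```python
def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_exemplar(new_diff, corpus):
    """Return the commit message of the most similar past diff; this
    exemplar would then guide the neural generation model."""
    _, message = max(corpus, key=lambda pair: jaccard(new_diff, pair[0]))
    return message

corpus = [
    ("+ if user is None : return", "Add null check for user"),
    ("+ log . info ( 'start' )", "Add startup logging"),
]
print(retrieve_exemplar("+ if item is None : return", corpus))  # null-check message
```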
arXiv Detail & Related papers (2022-03-05T10:55:15Z)
- Code Search based on Context-aware Code Translation [9.346066889885684]
Existing techniques leverage deep learning models to construct embedding representations for code snippets and queries.
We propose a novel context-aware code translation technique that translates code snippets into natural language descriptions.
We evaluate the effectiveness of our technique, called TranCS, on the CodeSearchNet corpus with 1,000 queries.
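As a deliberately crude sketch of the idea, not TranCS's actual translation rules: turning identifiers into words gives code and queries a shared natural-language vocabulary, after which search reduces to matching descriptions. TranCS performs the translation via IR-instruction-to-description mappings and uses learned similarity rather than word overlap.

```python
import re

def describe(code):
    """Rough stand-in for code-to-description translation: split snake_case
    and camelCase identifiers into lowercase words."""
    words = []
    for ident in re.findall(r"[A-Za-z_]+", code):
        spaced = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", ident).replace("_", " ")
        words.extend(spaced.lower().split())
    return words

def search(query, snippets):
    """Rank snippets by overlap between the query and their descriptions."""
    q = set(query.lower().split())
    return max(snippets, key=lambda s: len(q & set(describe(s))))

snippets = ["def read_file_lines(path): ...", "def sort_by_key(items, key): ..."]
print(search("read lines from a file", snippets))
```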
arXiv Detail & Related papers (2022-02-16T12:45:47Z)
- Contrastive Learning for Source Code with Structural and Functional Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
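The simplest instance of such a transformation is alpha-renaming, sketched below with Python's ast module; BOOST's transformations are more drastic, and the class and variable names here are illustrative.

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Alpha-rename variables: a semantics-preserving transformation that
    yields a functionally equivalent positive pair for contrastive learning."""
    def __init__(self, mapping):
        self.mapping = mapping
    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node
    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

src = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
tree = RenameVars({"xs": "items", "s": "acc", "x": "v"}).visit(ast.parse(src))
print(ast.unparse(tree))  # functionally equivalent, textually different (3.9+)
```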
arXiv Detail & Related papers (2021-10-08T02:56:43Z)
- GraphCodeBERT: Pre-training Code Representations with Data Flow [97.00641522327699]
We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code.
We use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement.
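For straight-line code, such edges can be computed in a few lines of Python; this sketch handles only simple assignments and is an illustration, whereas GraphCodeBERT's extractor covers full control flow across multiple languages.

```python
import ast

def data_flow_edges(code):
    """Link each variable read to the assignment that last wrote it,
    i.e. 'where-the-value-comes-from' edges for straight-line code."""
    last_def, edges = {}, []

    def visit(node):
        if isinstance(node, ast.Assign):  # reads happen before the write
            visit(node.value)
            for target in node.targets:
                visit(target)
            return
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Load) and node.id in last_def:
                edges.append((f"{node.id}@L{node.lineno}",
                              f"{node.id}@L{last_def[node.id]}"))
            elif isinstance(node.ctx, ast.Store):
                last_def[node.id] = node.lineno
        for child in ast.iter_child_nodes(node):
            visit(child)

    visit(ast.parse(code))
    return edges

print(data_flow_edges("a = 1\nb = a + 2\na = a + b\nc = a"))
# [('a@L2', 'a@L1'), ('a@L3', 'a@L1'), ('b@L3', 'b@L2'), ('a@L4', 'a@L3')]
```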
arXiv Detail & Related papers (2020-09-17T15:25:56Z)
- CoreGen: Contextualized Code Representation Learning for Commit Message Generation [39.383390029545865]
We propose a novel Contextualized code representation learning strategy for commit message Generation (CoreGen).
Experiments on the benchmark dataset demonstrate the superior effectiveness of our model over the baseline models with at least 28.18% improvement in terms of BLEU-4 score.
arXiv Detail & Related papers (2020-07-14T09:43:26Z)
- Fact-based Text Editing [11.115292572080131]
FactEditor edits a draft text by referring to given facts using a buffer, a stream, and a memory.
FactEditor conducts inference faster than the encoder-decoder approach.
arXiv Detail & Related papers (2020-07-02T06:50:30Z)