Long-Range Modeling of Source Code Files with eWASH: Extended Window
Access by Syntax Hierarchy
- URL: http://arxiv.org/abs/2109.08780v1
- Date: Fri, 17 Sep 2021 23:11:57 GMT
- Title: Long-Range Modeling of Source Code Files with eWASH: Extended Window
Access by Syntax Hierarchy
- Authors: Colin B. Clement, Shuai Lu, Xiaoyu Liu, Michele Tufano, Dawn Drain,
Nan Duan, Neel Sundaresan, Alexey Svyatkovskiy
- Abstract summary: We introduce an architecture-independent approach for incorporating entire file-level context into a fixed-length window.
We evaluate this approach on code generation tasks and joint translation of natural language and source code in the Python programming language.
- Score: 30.368963500809365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical language modeling and translation with transformers have found
many successful applications in program understanding and generation tasks,
setting high benchmarks for tools in modern software development environments.
The finite context window of these neural models means, however, that they will
be unable to leverage the entire relevant context of large files and packages
for any given task. While there are many efforts to extend the context window,
we introduce an architecture-independent approach for leveraging the syntactic
hierarchies of source code for incorporating entire file-level context into a
fixed-length window. Using concrete syntax trees of each source file we extract
syntactic hierarchies and integrate them into the context window by selectively
removing from view more specific, less relevant scopes for a given task. We
evaluate this approach on code generation tasks and joint translation of
natural language and source code in the Python programming language, achieving a
new state-of-the-art in code completion and summarization for Python in the
CodeXGLUE benchmark. We also introduce new CodeXGLUE benchmarks for
user-experience-motivated tasks: code completion with normalized literals, and
method body completion/code summarization conditioned on file-level context.
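To make the approach concrete, the sketch below elides lower-level scopes of a Python file so that its high-level structure fits a fixed budget, in the spirit of the abstract above. This is only a minimal sketch under stated assumptions: the paper works with concrete syntax trees and a task-specific relevance ordering, whereas here Python's built-in `ast` module and a crude character budget stand in, and `elide_file_context` is a hypothetical helper rather than the authors' implementation.

```python
# Minimal sketch of syntax-hierarchy elision in the spirit of eWASH.
# Assumptions: the paper uses concrete syntax trees and task-specific
# relevance; here Python's `ast` module and a character budget stand in.
import ast
import textwrap


def elide_file_context(source: str, max_chars: int = 2000) -> str:
    """Keep high-level scopes (imports, signatures, docstrings) and drop
    more specific scopes (function and class bodies) so the result fits
    a fixed budget."""
    tree = ast.parse(source)
    kept = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            kept.append(ast.unparse(node))
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # Keep the signature line and docstring, elide the body.
            # (Decorator handling and recursion into class bodies are
            # omitted in this sketch.)
            header = ast.unparse(node).splitlines()[0]
            doc = ast.get_docstring(node)
            body = f'    """{doc}"""\n    ...' if doc else "    ..."
            kept.append(f"{header}\n{body}")
        else:
            kept.append(ast.unparse(node))
    return "\n\n".join(kept)[:max_chars]


if __name__ == "__main__":
    example = textwrap.dedent('''
        import os

        def load_config(path):
            """Read a config file from disk."""
            with open(path) as f:
                return f.read()
    ''')
    print(elide_file_context(example))
```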
Related papers
- The Compressor-Retriever Architecture for Language Model OS [20.56093501980724]
This paper explores the concept of using a language model as the core component of an operating system (OS).
A key challenge in realizing such an LM OS is managing the life-long context and ensuring statefulness across sessions.
We introduce compressor-retriever, a model-agnostic architecture designed for life-long context management.
arXiv Detail & Related papers (2024-09-02T23:28:15Z)
- Long Code Arena: a Set of Benchmarks for Long-Context Code Models [75.70507534322336]
Long Code Arena is a suite of six benchmarks for code processing tasks that require project-wide context.
These tasks cover different aspects of code processing: library-based code generation, CI builds repair, project-level code completion, commit message generation, bug localization, and module summarization.
For each task, we provide a manually verified dataset for testing, an evaluation suite, and open-source baseline solutions.
arXiv Detail & Related papers (2024-06-17T14:58:29Z)
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
arXiv Detail & Related papers (2024-01-26T09:23:27Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context [82.88371379927112]
We propose a framework that incorporates cross-file context to learn the in-file and cross-file context jointly on top of pretrained code LMs.
CoCoMIC successfully improves the existing code LM with a 33.94% relative increase in exact match and a 28.69% relative increase in identifier matching for code completion when the cross-file context is provided.
arXiv Detail & Related papers (2022-12-20T05:48:09Z)
- Python Code Generation by Asking Clarification Questions [57.63906360576212]
In this work, we introduce a novel and more realistic setup for this task.
We hypothesize that the under-specification of a natural language description can be resolved by asking clarification questions.
We collect and introduce a new dataset named CodeClarQA, containing pairs of natural language descriptions and code together with synthetic clarification questions and answers.
arXiv Detail & Related papers (2022-12-19T22:08:36Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark. A minimal sketch of this retrieve-then-prompt pattern appears after this list.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
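As referenced in the ReACC entry above, the retrieve-then-prompt pattern can be pictured with the toy sketch below. It is a hedged illustration, not ReACC's method: ReACC combines lexical copying with semantic (dense) retrieval over a large corpus and then runs a code LM on the assembled context, whereas this version scores candidates by plain token overlap and stops at prompt assembly; `lexical_score` and `build_prompt` are hypothetical names used only for illustration.

```python
# Minimal sketch of retrieval-augmented code completion in the spirit of
# ReACC. Assumptions: ReACC uses lexical and dense-semantic retrievers and
# a code LM; this toy version uses token overlap only and omits the model call.
from collections import Counter


def lexical_score(query: str, candidate: str) -> float:
    """Token-overlap similarity between the unfinished code and a candidate."""
    q, c = Counter(query.split()), Counter(candidate.split())
    overlap = sum((q & c).values())
    return overlap / max(1, sum(q.values()))


def build_prompt(unfinished_code: str, corpus: list[str], top_k: int = 2) -> str:
    """Prepend the most similar retrieved snippets to the completion context."""
    ranked = sorted(corpus, key=lambda snip: lexical_score(unfinished_code, snip), reverse=True)
    retrieved = "\n\n".join(ranked[:top_k])
    return f"# Retrieved context:\n{retrieved}\n\n# Complete the following:\n{unfinished_code}"


if __name__ == "__main__":
    corpus = [
        "def read_json(path):\n    import json\n    return json.load(open(path))",
        "def add(a, b):\n    return a + b",
    ]
    prompt = build_prompt("def read_yaml(path):", corpus)
    print(prompt)  # the assembled prompt would then be passed to a code LM
```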