RepoFusion: Training Code Models to Understand Your Repository
- URL: http://arxiv.org/abs/2306.10998v1
- Date: Mon, 19 Jun 2023 15:05:31 GMT
- Title: RepoFusion: Training Code Models to Understand Your Repository
- Authors: Disha Shrivastava, Denis Kocetkov, Harm de Vries, Dzmitry Bahdanau,
Torsten Scholak
- Abstract summary: Large Language Models (LLMs) in coding assistants like GitHub Copilot struggle to understand the context present in the repository.
Recent work has shown the promise of using context from the repository during inference.
We propose RepoFusion, a framework to train models to incorporate relevant repository context.
- Score: 12.621282610983592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the huge success of Large Language Models (LLMs) in coding assistants
like GitHub Copilot, these models struggle to understand the context present in
the repository (e.g., imports, parent classes, files with similar names, etc.),
thereby producing inaccurate code completions. This effect is more pronounced
when using these assistants for repositories that the model has not seen during
training, such as proprietary software or work-in-progress code projects.
Recent work has shown the promise of using context from the repository during
inference. In this work, we extend this idea and propose RepoFusion, a
framework to train models to incorporate relevant repository context.
Experiments on single-line code completion show that our models trained with
repository context significantly outperform much larger code models such as
CodeGen-16B-multi (~73× larger) and closely match the performance of
the ~70× larger StarCoderBase model that was trained with the
Fill-in-the-Middle objective. We find these results to be a novel and
compelling demonstration of the gains that training with repository context can
bring. We carry out extensive ablation studies to investigate the impact of
design choices such as context type, number of contexts, context length, and
initialization within our framework. Lastly, we release Stack-Repo, a dataset
of 200 Java repositories with permissive licenses and near-deduplicated files
that are augmented with three types of repository contexts. Additionally, we
are making available the code and trained checkpoints for our work. Our
released resources can be found at https://huggingface.co/RepoFusion.
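
The core pattern in RepoFusion is to encode several retrieved repository contexts independently and let a single decoder attend over all of them, in the spirit of the Fusion-in-Decoder architecture the paper builds on. Below is a minimal, dependency-free sketch of that input-construction and fusion pattern; the whitespace tokenizer and hash-based encoder are toy stand-ins, not the paper's CodeT5-based model.

```python
# Minimal sketch of Fusion-in-Decoder-style input construction for
# repository-aware code completion. The encoder stub is a hypothetical
# stand-in for a real seq2seq code model such as CodeT5.

from typing import List, Tuple

MAX_CONTEXT_TOKENS = 512  # assumed per-context length budget

def build_fid_inputs(surrounding: str, repo_contexts: List[str]) -> List[str]:
    """Pair each retrieved repository context with the code surrounding
    the completion point; each pair is encoded independently."""
    inputs = []
    for ctx in repo_contexts:
        joined = ctx + "\n" + surrounding
        tokens = joined.split()[:MAX_CONTEXT_TOKENS]  # toy whitespace tokenization
        inputs.append(" ".join(tokens))
    return inputs

def encode(text: str) -> List[Tuple[str, int]]:
    """Toy 'encoder': one (token, id) state per token. A real model
    would return contextual hidden vectors here."""
    return [(tok, hash(tok) % 50000) for tok in text.split()]

def fuse(encoded_inputs):
    """FiD fusion step: concatenate the independently encoded contexts
    so the decoder can attend over all of them jointly."""
    fused = []
    for states in encoded_inputs:
        fused.extend(states)
    return fused

if __name__ == "__main__":
    surrounding = "def area(r):\n    return "
    repo_contexts = [
        "import math  # from utils/geometry.py",
        "PI = 3.14159  # from constants.py",
    ]
    fused = fuse(encode(s) for s in build_fid_inputs(surrounding, repo_contexts))
    print(f"decoder attends over {len(fused)} fused states")
```

Because each context is encoded separately, encoder cost grows linearly with the number of contexts, while the decoder still sees all of them at once.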
Related papers
- CodeRAG-Bench: Can Retrieval Augment Code Generation? [78.37076502395699]
We conduct a systematic, large-scale analysis of code generation using retrieval-augmented generation.
We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks.
We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources.
arXiv Detail & Related papers (2024-06-20T16:59:52Z)
- Long Code Arena: a Set of Benchmarks for Long-Context Code Models [75.70507534322336]
Long Code Arena is a suite of six benchmarks for code processing tasks that require project-wide context.
These tasks cover different aspects of code processing: library-based code generation, CI builds repair, project-level code completion, commit message generation, bug localization, and module summarization.
For each task, we provide a manually verified dataset for testing, an evaluation suite, and open-source baseline solutions.
arXiv Detail & Related papers (2024-06-17T14:58:29Z)
- On the Impacts of Contexts on Repository-Level Code Generation [5.641402231731082]
We present a novel benchmark designed to evaluate repository-level code generation.
We focus on three key aspects: executability, functional correctness through comprehensive test case generation, and accurate utilization of cross-file contexts.
arXiv Detail & Related papers (2024-06-17T10:45:22Z)
- On The Importance of Reasoning for Context Retrieval in Repository-Level Code Editing [82.96523584351314]
We decouple the task of context retrieval from the other components of the repository-level code editing pipelines.
We conclude that while reasoning helps to improve the precision of the gathered context, it still lacks the ability to determine whether that context is sufficient.
arXiv Detail & Related papers (2024-06-06T19:44:17Z)
- Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository [4.767858874370881]
We introduce RepoClassBench, a benchmark designed to rigorously evaluate LLMs in generating class-level code within real-world repositories.
RepoClassBench includes "Natural Language to Class generation" tasks across Java, Python & C# from a selection of repositories.
We introduce Retrieve-Repotools-Reflect (RRR), a novel approach that equips LLMs with static analysis tools to iteratively navigate & reason about repository-level context.
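A hedged sketch of what such a tool-enhanced loop might look like: the model either requests a static-analysis tool call or emits a final answer, and tool observations are appended to the prompt for the next round. `query_model`, `list_class_members`, and the `CALL`/`FINAL` protocol below are all hypothetical stand-ins, not the paper's actual API.

```python
# Hypothetical sketch of an iterative, tool-enhanced generation loop in the
# spirit of Retrieve-Repotools-Reflect (RRR). All helper names are assumed.

def list_class_members(repo_path: str, class_name: str) -> str:
    """Stand-in for a static-analysis tool the LLM can invoke."""
    return f"(members of {class_name} in {repo_path} would be listed here)"

TOOLS = {"list_class_members": list_class_members}

def query_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a code model."""
    return "FINAL: class Rectangle { ... }"  # toy fixed response

def generate_class(spec: str, repo_path: str, max_rounds: int = 5) -> str:
    prompt = f"Spec: {spec}\nTools: {list(TOOLS)}\n"
    for _ in range(max_rounds):
        reply = query_model(prompt)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        # Otherwise the model asked for a tool, e.g. "CALL list_class_members Shape"
        _, tool_name, arg = reply.split(maxsplit=2)
        observation = TOOLS[tool_name](repo_path, arg)
        prompt += f"\nObservation: {observation}\n"  # reflect on tool output
    return ""

print(generate_class("A Rectangle class extending Shape", "./my-repo"))
```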
arXiv Detail & Related papers (2024-04-22T03:52:54Z)
- RepoHyper: Search-Expand-Refine on Semantic Graphs for Repository-Level Code Completion [12.173834895070827]
RepoHyper is a framework designed to address the complex challenges associated with repository-level code completion.
Central to RepoHyper is the Repo-level Semantic Graph (RSG), a novel semantic graph structure that encapsulates the vast context of code repositories.
Our evaluations show that RepoHyper markedly outperforms existing techniques in repository-level code completion.
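As a rough illustration of a repository-level semantic graph, the standard-library sketch below links Python modules by their imports and then "expands" outward from a module, loosely mirroring the search-expand steps; the real RSG is far richer (types, calls, usages), so treat this purely as an assumed simplification.

```python
# Minimal sketch of building a repository-level semantic graph, loosely in
# the spirit of RepoHyper's RSG. Only import edges are modeled here; uses
# only the Python standard library.

import ast
from collections import defaultdict
from pathlib import Path

def build_import_graph(repo_root: str) -> dict:
    """Map each Python module in the repo to the modules it imports."""
    graph = defaultdict(set)
    for path in Path(repo_root).rglob("*.py"):
        module = path.stem
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip unparsable files
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[module].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[module].add(node.module)
    return dict(graph)

def neighbors(graph: dict, module: str, hops: int = 2) -> set:
    """'Expand' step: collect modules reachable within a few hops, a crude
    analogue of pulling in related context for a completion site."""
    frontier, seen = {module}, {module}
    for _ in range(hops):
        frontier = {m for f in frontier for m in graph.get(f, ())} - seen
        seen |= frontier
    return seen - {module}

graph = build_import_graph(".")
print(neighbors(graph, "main"))
```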
arXiv Detail & Related papers (2024-03-10T05:10:34Z)
- StarCoder 2 and The Stack v2: The Next Generation [105.93298676368798]
We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens.
We thoroughly evaluate them on a comprehensive set of Code LLM benchmarks.
Our large model, StarCoder2-15B, significantly outperforms other models of comparable size.
arXiv Detail & Related papers (2024-02-29T13:53:35Z)
- RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation [96.75695811963242]
RepoCoder is a framework to streamline the repository-level code completion process.
It incorporates a similarity-based retriever and a pre-trained code language model.
It consistently outperforms the vanilla retrieval-augmented code completion approach.
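A minimal sketch of the iterative retrieve-then-generate loop this describes: each round retrieves repository snippets similar to the current draft and regenerates with them in the prompt. The token-overlap retriever and the `complete` stub are hypothetical stand-ins for the paper's similarity-based retriever and pre-trained code LM.

```python
# Hedged sketch of RepoCoder-style iterative retrieval and generation:
# the model's previous draft serves as the retrieval query for the next
# round. `retrieve` and `complete` are toy stand-ins.

from typing import List

def retrieve(query: str, repo_snippets: List[str], k: int = 2) -> List[str]:
    """Toy similarity retriever: rank snippets by shared tokens."""
    q = set(query.split())
    return sorted(repo_snippets, key=lambda s: -len(q & set(s.split())))[:k]

def complete(prompt: str) -> str:
    """Stand-in for a code LM; a real system would call a model here."""
    return "return math.pi * r * r"

def repocoder_loop(unfinished: str, repo_snippets: List[str], rounds: int = 2) -> str:
    draft = unfinished
    for _ in range(rounds):
        context = "\n".join(retrieve(draft, repo_snippets))
        draft = complete(context + "\n" + unfinished)  # regenerate with fresh context
    return draft

snippets = ["import math", "def circumference(r): return 2 * math.pi * r"]
print(repocoder_loop("def area(r):", snippets))
```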
arXiv Detail & Related papers (2023-03-22T13:54:46Z)
- Repository-Level Prompt Generation for Large Language Models of Code [28.98699307030983]
We propose a framework that learns to generate example-specific prompts using prompt proposals.
The prompt proposals take context from the entire repository.
We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives.
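A toy sketch of the prompt-proposal idea: several candidate prompts are assembled from different repository sources and one is selected for the completion site. The proposal sources and the heuristic scorer below are illustrative assumptions; the paper learns this selection rather than hand-coding it.

```python
# Hypothetical sketch of composing candidate prompts from different
# repository sources ("prompt proposals") and picking one with a scorer.

def proposals(target_file: str, repo: dict) -> dict:
    """Each proposal pulls context from a different part of the repo.
    `repo` maps file names to contents (an assumed toy representation)."""
    return {
        "imports": "\n".join(l for l in repo[target_file].splitlines()
                             if l.startswith("import")),
        "sibling_file": repo.get(target_file.replace(".py", "_test.py"), ""),
    }

def score(proposal_text: str, hole_prefix: str) -> int:
    """Toy scorer: prefer proposals sharing tokens with code near the hole."""
    return len(set(proposal_text.split()) & set(hole_prefix.split()))

def best_prompt(target_file: str, hole_prefix: str, repo: dict) -> str:
    cands = proposals(target_file, repo)
    name = max(cands, key=lambda n: score(cands[n], hole_prefix))
    return cands[name] + "\n" + hole_prefix

repo = {"app.py": "import os\nimport json\n", "app_test.py": "import json\n"}
print(best_prompt("app.py", "data = json.", repo))
```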
arXiv Detail & Related papers (2022-06-26T10:51:25Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
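A minimal sketch of the hybrid retrieval this suggests, mixing a lexical overlap score (to support copying) with a crude semantic similarity. Real systems would pair BM25 with a dense retriever; both scorers below are toy stand-ins.

```python
# Toy hybrid retrieval in the spirit of ReACC: blend a lexical score with
# a 'semantic' cosine over bag-of-words counts. Both scorers are assumed
# simplifications of BM25 and dense embeddings.

from collections import Counter
from math import sqrt

def lexical_score(query: str, candidate: str) -> float:
    q, c = set(query.split()), set(candidate.split())
    return len(q & c) / (len(q) or 1)

def semantic_score(query: str, candidate: str) -> float:
    """Toy 'semantic' similarity: cosine over bag-of-words counts."""
    qv, cv = Counter(query.split()), Counter(candidate.split())
    dot = sum(qv[t] * cv[t] for t in qv)
    norm = (sqrt(sum(v * v for v in qv.values()))
            * sqrt(sum(v * v for v in cv.values())))
    return dot / norm if norm else 0.0

def hybrid_rank(query: str, candidates: list, alpha: float = 0.5) -> list:
    scored = [(alpha * lexical_score(query, c)
               + (1 - alpha) * semantic_score(query, c), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)]

cands = ["def read_json(path):", "def write_csv(path):", "x = 1"]
print(hybrid_rank("def load_json(path):", cands))
```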
arXiv Detail & Related papers (2022-03-15T08:25:08Z)