Private-Library-Oriented Code Generation with Large Language Models
- URL: http://arxiv.org/abs/2307.15370v1
- Date: Fri, 28 Jul 2023 07:43:13 GMT
- Title: Private-Library-Oriented Code Generation with Large Language Models
- Authors: Daoguang Zan, Bei Chen, Yongshun Gong, Junzhi Cao, Fengji Zhang,
Bingchao Wu, Bei Guan, Yilong Yin, Yongji Wang
- Abstract summary: This paper focuses on utilizing large language models (LLMs) for code generation in private libraries.
We propose a novel framework that emulates the process of programmers writing private code.
We create four private library benchmarks, including TorchDataEval, TorchDataComplexEval, MonkeyEval, and BeatNumEval.
- Score: 52.73999698194344
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large language models (LLMs), such as Codex and GPT-4, have recently
showcased their remarkable code generation abilities, facilitating a
significant boost in coding efficiency. This paper will delve into utilizing
LLMs for code generation in private libraries, as they are widely employed in
everyday programming. Despite their remarkable capabilities, generating such
private APIs poses a formidable conundrum for LLMs, as they inherently lack
exposure to these private libraries during pre-training. To address this
challenge, we propose a novel framework that emulates the process of
programmers writing private code. This framework comprises two modules:
APIFinder first retrieves potentially useful APIs from API documentation; and
APICoder then leverages these retrieved APIs to generate private code.
Specifically, APIFinder employs vector retrieval techniques and allows user
involvement in the retrieval process. For APICoder, it can directly utilize
off-the-shelf code generation models. To further cultivate explicit proficiency
in invoking APIs from prompts, we continuously pre-train a reinforced version
of APICoder, named CodeGenAPI. Our goal is to train the above two modules on
vast public libraries, enabling generalization to private ones. Meanwhile, we
create four private library benchmarks, including TorchDataEval,
TorchDataComplexEval, MonkeyEval, and BeatNumEval, and meticulously handcraft
test cases for each benchmark to support comprehensive evaluations. Numerous
experiments on the four benchmarks consistently affirm the effectiveness of our
approach. Furthermore, deeper analysis is also conducted to glean additional
insights.
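The two-module pipeline the abstract describes (APIFinder retrieves candidate APIs, APICoder generates code conditioned on them) can be sketched in a few lines. Everything below is a toy illustration, not the paper's implementation: the bag-of-words scoring stands in for APIFinder's vector retrieval, the tiny `API_DOCS` table stands in for real API documentation, and the output is merely the augmented prompt that would be handed to a code model such as APICoder or CodeGenAPI.

```python
# Toy sketch of a retrieve-then-generate pipeline in the spirit of
# APIFinder + APICoder. All names and the scoring are illustrative.
import math
from collections import Counter

# Stand-in for a private library's API documentation.
API_DOCS = {
    "torchdata.datapipes.iter.FileOpener": "Opens files from pathnames and yields file streams.",
    "torchdata.datapipes.iter.CSVParser": "Parses file streams as CSV rows.",
    "torchdata.datapipes.iter.Shuffler": "Shuffles the input datapipe with a buffer.",
}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a dense encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def api_finder(query: str, k: int = 2) -> list[str]:
    """Stage 1: retrieve the k APIs whose docs best match the query."""
    q = embed(query)
    ranked = sorted(API_DOCS, key=lambda n: cosine(q, embed(API_DOCS[n])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, apis: list[str]) -> str:
    """Stage 2: prepend retrieved API docs to the prompt for the code model."""
    ctx = "\n".join(f"# {name}: {API_DOCS[name]}" for name in apis)
    return f"{ctx}\n# Task: {query}\n"

apis = api_finder("parse a csv file into rows")
prompt = build_prompt("parse a csv file into rows", apis)
```

Training both stages on large public libraries and swapping in private documentation at inference time is what lets the framework generalize to APIs never seen during pre-training.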
Related papers
- OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models [70.72097493954067]
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems.
We introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community.
arXiv Detail & Related papers (2024-11-07T17:47:25Z)
- A Comprehensive Framework for Evaluating API-oriented Code Generation in Large Language Models [14.665460257371164]
Large language models (LLMs) like GitHub Copilot and ChatGPT have emerged as powerful tools for code generation.
We propose AutoAPIEval, a framework designed to evaluate the capabilities of LLMs in API-oriented code generation.
arXiv Detail & Related papers (2024-09-23T17:22:09Z)
- A Systematic Evaluation of Large Code Models in API Suggestion: When, Which, and How [53.65636914757381]
API suggestion is a critical task in modern software development.
Recent advancements in large code models (LCMs) have shown promise in the API suggestion task.
arXiv Detail & Related papers (2024-09-20T03:12:35Z)
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
- Are Human Rules Necessary? Generating Reusable APIs with CoT Reasoning and In-Context Learning [14.351476383642016]
We propose a novel approach, named Code2API, to automatically perform APIzation for Stack Overflow code snippets.
Code2API does not require additional model training or any manual crafting rules.
It can be easily deployed on personal computers without relying on other external tools.
arXiv Detail & Related papers (2024-05-06T14:22:17Z)
- Differentially Private Synthetic Data via Foundation Model APIs 2: Text [56.13240830670327]
A lot of high-quality text data generated in the real world is private and cannot be shared or used freely due to privacy concerns.
We propose an augmented PE algorithm, named Aug-PE, that applies to the complex setting of text.
Our results demonstrate that Aug-PE produces DP synthetic text that yields competitive utility with the SOTA DP finetuning baselines.
arXiv Detail & Related papers (2024-03-04T05:57:50Z)
- Compositional API Recommendation for Library-Oriented Code Generation [23.355509276291198]
We propose CAPIR, which adopts a "divide-and-conquer" strategy to recommend APIs for coarse-grained requirements.
We present two challenging benchmarks, RAPID (Recommend APIs based on Documentation) and LOCG (Library-Oriented Code Generation).
Experimental results on these benchmarks demonstrate the effectiveness of CAPIR in comparison to existing baselines.
arXiv Detail & Related papers (2024-02-29T18:27:27Z)
- Evaluating Embedding APIs for Information Retrieval [51.24236853841468]
We evaluate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval.
We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English.
For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost.
arXiv Detail & Related papers (2023-05-10T16:40:52Z)
- When Language Model Meets Private Library [25.610036042971043]
In practice, it is common for programmers to write code using private libraries.
This is a challenge for language models since they have never seen private APIs during training.
We propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs.
arXiv Detail & Related papers (2022-10-31T11:42:06Z)
- On the Effectiveness of Pretrained Models for API Learning [8.788509467038743]
Developers frequently use APIs to implement certain functionalities, such as parsing Excel files or reading and writing text files line by line.
Developers can greatly benefit from automatic API usage sequence generation based on natural language queries for building applications in a faster and cleaner manner.
Existing approaches utilize information retrieval models to search for matching API sequences given a query or use RNN-based encoder-decoder to generate API sequences.
arXiv Detail & Related papers (2022-04-05T20:33:24Z)
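The budget-friendly hybrid noted in the "Evaluating Embedding APIs for Information Retrieval" entry (retrieve cheaply with BM25, then re-rank the candidates with an embedding model) can be sketched as follows. The scoring functions are simplified stand-ins of my own, assuming any lexical retriever and any dense encoder could be substituted; they are not the paper's code.

```python
# Toy sketch of "retrieve with BM25, re-rank with embeddings".
# lexical_score stands in for BM25; dense_score stands in for an embedding API.
import math
from collections import Counter

DOCS = [
    "BM25 is a lexical ranking function based on term frequency.",
    "Dense embeddings map text to vectors for semantic search in any language.",
    "Hybrid retrieval combines lexical and dense scores.",
]

def lexical_score(query: str, doc: str) -> int:
    """Stand-in for BM25: count query-term occurrences in the document."""
    d = doc.lower().split()
    return sum(d.count(t) for t in set(query.lower().split()))

def dense_score(query: str, doc: str) -> float:
    """Stand-in for an embedding API: cosine over bag-of-words vectors."""
    a, b = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query: str, k: int = 2) -> list[str]:
    # Stage 1: cheap lexical retrieval narrows the pool to k candidates.
    candidates = sorted(DOCS, key=lambda d: lexical_score(query, d), reverse=True)[:k]
    # Stage 2: only those k candidates pay the (pricier) embedding cost.
    return sorted(candidates, key=lambda d: dense_score(query, d), reverse=True)
```

The cost saving comes from stage 2 touching only the short candidate list rather than the whole corpus, which is why the entry above calls this setup budget-friendly.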
This list is automatically generated from the titles and abstracts of the papers on this site.