All You Need Is Logs: Improving Code Completion by Learning from
Anonymous IDE Usage Logs
- URL: http://arxiv.org/abs/2205.10692v1
- Date: Sat, 21 May 2022 23:21:26 GMT
- Title: All You Need Is Logs: Improving Code Completion by Learning from
Anonymous IDE Usage Logs
- Authors: Vitaliy Bibaev, Alexey Kalina, Vadim Lomshakov, Yaroslav Golubev,
Alexander Bezzubov, Nikita Povarov, Timofey Bryksin
- Abstract summary: We propose an approach for collecting completion usage logs from users in an IDE.
We use them to train a machine-learning-based model for ranking completion candidates.
Our evaluation shows that a simple ranking model trained on past user behavior logs significantly improves the code completion experience.
- Score: 55.606644084003094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrated Development Environments (IDEs) are designed to make users more
productive and their work more comfortable. To achieve this, many diverse tools
are embedded into IDEs, and IDE developers can use anonymous usage logs to
collect data about how these tools are being used and to improve them. A
particularly important component this can be applied to is code completion,
since improving code completion using statistical learning techniques is a
well-established research area.
In this work, we propose an approach for collecting completion usage logs
from users in an IDE and using them to train a machine-learning-based model
for ranking completion candidates. We developed a set of features that describe
completion candidates and their context, and deployed their anonymized
collection in the Early Access Program of IntelliJ-based IDEs. We used the logs
to collect a dataset of code completions from users, and employed it to train a
ranking CatBoost model. Then, we evaluated it in two settings: on a held-out
set of the collected completions and in a separate A/B test on two different
groups of users in the IDE. Our evaluation shows that a simple ranking model
trained on past user behavior logs significantly improved the code completion
experience. Compared to the default heuristics-based ranking, our model
decreased the number of typing actions necessary to perform a completion in
the IDE from 2.073 to 1.832.
The approach adheres to privacy requirements and legal constraints, since it
does not require collecting personal information: all the necessary
anonymization is performed on the client's side. Importantly, it can be
improved continuously by implementing new features, collecting new data, and
evaluating new models; this way, we have been using it in production since the
end of 2020.
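The pipeline described in the abstract (logging per-candidate features, then ranking completion candidates with a model trained on those logs) can be sketched in miniature. The following is a minimal, dependency-free illustration: the feature names and hand-set linear weights are hypothetical stand-ins for the paper's learned CatBoost ranker and its actual feature set.

```python
# Illustrative sketch of the candidate-ranking setup described above.
# The paper trains a CatBoost ranking model on logged features; here,
# hypothetical features and hand-set weights stand in for the learned model.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    prefix_match_len: int   # how many typed characters the candidate matches
    is_recently_used: bool  # appeared in recent completions in this session
    kind_is_local: bool     # e.g. a local variable vs. a library symbol


# Hypothetical weights standing in for the trained ranker.
WEIGHTS = {
    "prefix_match_len": 1.0,
    "is_recently_used": 2.5,
    "kind_is_local": 1.5,
}


def score(c: Candidate) -> float:
    """Linear score over the candidate's features (bools count as 0/1)."""
    return (WEIGHTS["prefix_match_len"] * c.prefix_match_len
            + WEIGHTS["is_recently_used"] * c.is_recently_used
            + WEIGHTS["kind_is_local"] * c.kind_is_local)


def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Return candidates sorted best-first, as a completion popup would show them."""
    return sorted(candidates, key=score, reverse=True)


candidates = [
    Candidate("toString", 2, False, False),
    Candidate("total", 2, True, True),
    Candidate("touch", 2, False, True),
]
print([c.text for c in rank(candidates)])  # best candidate first
```

In the deployed system, such scores come from a model trained on logged user selections, and a better ranking directly reduces how many typing actions a user needs before the intended candidate reaches the top of the popup.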
Related papers
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- Full Line Code Completion: Bringing AI to Desktop [3.5296482958373447]
We describe our approach for building a multi-token code completion feature for the JetBrains' IntelliJ Platform.
The feature suggests only syntactically correct code and works fully locally, i.e., data querying and the generation of suggestions happens on the end user's machine.
arXiv Detail & Related papers (2024-05-14T15:42:55Z)
- Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [66.51005288743153]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models.
arXiv Detail & Related papers (2024-04-22T15:54:53Z)
- How far are AI-powered programming assistants from meeting developers' needs? [17.77734978425295]
In-IDE AI coding assistant tools (ACATs) like GitHub Copilot have significantly impacted developers' coding habits.
We simulate real development scenarios and recruit 27 computer science students to investigate their behavior with three popular ACATs.
We find that ACATs generally enhance task completion rates, reduce time, improve code quality, and increase self-perceived productivity.
arXiv Detail & Related papers (2024-04-18T08:51:14Z)
- Context Composing for Full Line Code Completion [0.46040036610482665]
The paper describes our approach to context composing for the Transformer model that is a core of the feature's implementation.
We share our next steps to improve the feature and emphasize the importance of several research aspects in the area.
arXiv Detail & Related papers (2024-02-14T15:17:37Z)
- Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study [4.438873396405334]
We aim to answer whether making code easier to understand through using contextual data improves the performance of pre-trained code language models for the task of code completion.
For comments, we find that the models perform better in the presence of multi-line comments.
arXiv Detail & Related papers (2023-04-24T17:09:14Z)
- GEMv2: Multilingual NLG Benchmarking in a Single Line of Code [161.1761414080574]
The Generation, Evaluation, and Metrics Benchmark (GEMv2) introduces a modular infrastructure for dataset, model, and metric developers.
GEMv2 supports 40 documented datasets in 51 languages.
Models for all datasets can be evaluated online and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
arXiv Detail & Related papers (2022-06-22T17:52:30Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate the final feature embeddings.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Towards Full-line Code Completion with Neural Language Models [25.458883198815393]
We discuss the possibility of directly completing a whole line of code instead of a single token.
Recent neural language models have been adopted as a preferred approach for code completion.
arXiv Detail & Related papers (2020-09-18T03:12:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.