Who Made This Copy? An Empirical Analysis of Code Clone Authorship
- URL: http://arxiv.org/abs/2309.01116v1
- Date: Sun, 3 Sep 2023 08:24:32 GMT
- Title: Who Made This Copy? An Empirical Analysis of Code Clone Authorship
- Authors: Reishi Yokomori and Katsuro Inoue
- Abstract summary: We analyzed the authorship of code clones at the line-level granularity for Java files in 153 Apache projects stored on GitHub.
We found that there are a substantial number of clone lines across all projects.
One-third of clone sets are primarily written by multiple leading authors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Code clones are code snippets that are identical or similar to other snippets
within the same or different files. They are often created through
copy-and-paste practices during development and maintenance activities. Since
code clones may require consistent updates and coherent management, they
present a challenging issue in software maintenance. Therefore, many studies
have been conducted to detect various types of clones with high accuracy,
scalability, or performance. However, the nature of code clones itself has
received limited exploration. Even the fundamental question of whether code snippets in the same
clone set were written by the same author or different authors has not been
thoroughly investigated.
In this paper, we investigate the characteristics of code clones with a focus
on authorship. We analyzed the authorship of code clones at the line-level
granularity for Java files in 153 Apache projects stored on GitHub and
addressed three research questions.
Based on these research questions, we found that there are a substantial
number of clone lines across all projects (an average of 18.5% for all
projects). Furthermore, authors who contribute to many non-clone lines also
contribute to many clone lines. Additionally, we found that one-third of clone
sets are primarily written by multiple leading authors.
These results confirm our intuitive understanding of clone characteristics,
although no previous publications have provided empirical validation data from
multiple projects. As the results could assist in designing better clone
management techniques, we will explore the implications of developing an
effective clone management tool.
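The line-level analysis described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' tooling: it assumes per-line author labels (such as those recoverable from `git blame`) and clone line ranges from some clone detector, both of which are hypothetical inputs here.

```python
from collections import Counter

def clone_line_counts(line_authors, clone_ranges):
    """Count clone and non-clone lines per author.

    line_authors: list where line_authors[i] is the author of line i+1
                  (e.g., extracted from `git blame` output)
    clone_ranges: list of (start, end) 1-based inclusive line ranges
                  reported as clones by a clone detector
    """
    clone_lines = set()
    for start, end in clone_ranges:
        clone_lines.update(range(start, end + 1))

    clone_by_author = Counter()
    nonclone_by_author = Counter()
    for lineno, author in enumerate(line_authors, start=1):
        if lineno in clone_lines:
            clone_by_author[author] += 1
        else:
            nonclone_by_author[author] += 1
    return clone_by_author, nonclone_by_author

# Hypothetical example: a 6-line file where lines 2-4 belong to a clone set.
authors = ["alice", "alice", "bob", "bob", "alice", "carol"]
clones, nonclones = clone_line_counts(authors, [(2, 4)])
ratio = sum(clones.values()) / len(authors)  # per-file clone-line ratio
```

Aggregating such per-file ratios across a project yields figures comparable to the 18.5% average reported above, and the per-author counters support questions like whether heavy non-clone contributors also dominate clone lines.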
Related papers
- An Empirical Analysis of Git Commit Logs for Potential Inconsistency in Code Clones [0.9745141082552166]
We analyzed 45 repositories owned by the Apache Software Foundation on GitHub.
On average, clone snippets are changed infrequently, typically only two or three times throughout their lifetime.
The ratio of co-changes is about half of all clone changes.
arXiv Detail & Related papers (2024-09-13T06:14:50Z)
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
- CC2Vec: Combining Typed Tokens with Contrastive Learning for Effective Code Clone Detection [20.729032739935132]
CC2Vec is a novel code encoding method designed to swiftly identify simple code clones.
We evaluate CC2Vec on two widely used datasets (i.e., BigCloneBench and Google Code Jam)
arXiv Detail & Related papers (2024-05-01T10:18:31Z)
- Unraveling Code Clone Dynamics in Deep Learning Frameworks [0.7285835869818668]
Deep Learning (DL) frameworks play a critical role in advancing artificial intelligence, and their rapid growth underscores the need for a comprehensive understanding of software quality and maintainability.
Code clones refer to identical or highly similar source code fragments within the same project or even across different projects.
We empirically analyze code clones in nine popular DL frameworks, including Paddle, PyTorch, Aesara, Ray, MXNet, Keras, Jax, and BentoML.
arXiv Detail & Related papers (2024-04-25T21:12:35Z)
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
arXiv Detail & Related papers (2024-01-26T09:23:27Z)
- ZC3: Zero-Shot Cross-Language Code Clone Detection [79.53514630357876]
We propose a novel method named ZC3 for Zero-shot Cross-language Code Clone detection.
ZC3 designs the contrastive snippet prediction to form an isomorphic representation space among different programming languages.
Based on this, ZC3 exploits domain-aware learning and cycle consistency learning to generate representations that are aligned among different languages and diacritical for different types of clones.
arXiv Detail & Related papers (2023-08-26T03:48:10Z)
- Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey [40.99060616674878]
Large language models (LLMs) possess diverse code-related knowledge, making them versatile for various software engineering challenges.
This paper provides the first comprehensive evaluation of LLMs for clone detection, covering different clone types, languages, and prompts.
We find advanced LLMs excel in detecting complex semantic clones, surpassing existing methods.
arXiv Detail & Related papers (2023-08-02T14:56:01Z)
- CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
arXiv Detail & Related papers (2023-06-05T20:39:08Z)
- RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation [96.75695811963242]
RepoCoder is a framework to streamline the repository-level code completion process.
It incorporates a similarity-based retriever and a pre-trained code language model.
It consistently outperforms the vanilla retrieval-augmented code completion approach.
arXiv Detail & Related papers (2023-03-22T13:54:46Z)
- Evaluation of Contrastive Learning with Various Code Representations for Code Clone Detection [3.699097874146491]
We evaluate contrastive learning for detecting semantic clones of code snippets.
We use CodeTransformator to create a dataset that mimics plagiarised code based on competitive programming solutions.
The results of our evaluation show that the proposed models perform diversely across tasks; however, the graph-based models generally outperform the others.
arXiv Detail & Related papers (2022-06-17T12:25:44Z)
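Several of the papers above hinge on what counts as a clone (identical or highly similar fragments, possibly with renamed identifiers). As a minimal, hedged illustration of that definition (not any surveyed tool's method), a Type-2 "near-miss" check can rename identifiers to positional placeholders before comparing token streams; the regex tokenizer here is a deliberate simplification:

```python
import re

# Matches identifiers, integer literals, or any other single non-space char.
TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|\S")

def normalize(snippet):
    """Tokenize and rename identifiers to positional placeholders, so
    Type-2 clones (consistently renamed variables) normalize identically."""
    names = {}
    out = []
    for tok in TOKEN.findall(snippet):
        if re.fullmatch(r"[A-Za-z_]\w*", tok):
            out.append(names.setdefault(tok, f"id{len(names)}"))
        else:
            out.append(tok)
    return out

def is_clone(a, b):
    return normalize(a) == normalize(b)

# Renamed variables still match; changed structure does not.
same = is_clone("total = x + y", "sum = a + b")   # True
diff = is_clone("total = x + y", "total = x * y") # False
```

Real detectors (token-, tree-, or learning-based, as in the papers above) add similarity thresholds, statement-level granularity, and language awareness on top of this basic normalize-and-compare idea.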
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.