Measuring the effectiveness of code review comments in GitHub repositories: A machine learning approach
- URL: http://arxiv.org/abs/2508.16053v1
- Date: Fri, 22 Aug 2025 03:00:48 GMT
- Title: Measuring the effectiveness of code review comments in GitHub repositories: A machine learning approach
- Authors: Shadikur Rahman, Umme Ayman Koana, Hasibul Karim Shanto, Mahmuda Akter, Chitra Roy, Aras M. Ismael,
- Abstract summary: This paper presents an empirical study of the effectiveness of machine learning techniques in classifying code review text by semantic meaning. We manually labelled 13,557 code review comments generated by three open-source projects on GitHub over a one-year period. To recognize the sentiment polarity (sentiment orientation) of code reviews, we apply seven machine learning algorithms and compare their results to identify the best performer.
- Score: 0.969054772470341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an empirical study of the effectiveness of machine learning techniques in classifying code review text by semantic meaning. Code review comments were extracted from the source control repositories of three open-source GitHub projects, covering one year of development activity. Programmers need to understand the feedback on their code and the errors it points out; classifying the sentiment polarity of review comments helps them do so without misinterpretation. We manually labelled 13,557 code review comments generated by the three projects during that year. To recognize the sentiment polarity (sentiment orientation) of code reviews, we applied seven machine learning algorithms and compared their results to identify the best performer. Among them, the Linear Support Vector Classifier (SVC) achieved the highest accuracy. This study will help programmers act on code reviews while avoiding misconceptions.
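The pipeline the abstract describes (sparse text features plus a linear classifier such as Linear SVC) can be sketched with a minimal stand-in. The snippet below uses bag-of-words features and a perceptron, a simpler linear classifier, since the abstract does not specify the feature extraction or SVC hyperparameters; the toy labelled comments are hypothetical and not drawn from the paper's corpus.

```python
from collections import defaultdict

def tokenize(text):
    # Lowercase and split on whitespace; a real pipeline would also
    # strip punctuation and apply TF-IDF weighting.
    return text.lower().split()

def train_perceptron(samples, epochs=20):
    """Train a linear classifier (perceptron) on bag-of-words features.

    samples: list of (comment_text, label) with label +1 (positive
    sentiment) or -1 (negative). The perceptron stands in for the
    paper's Linear SVC; both learn a linear decision boundary over
    sparse text features.
    """
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in samples:
            score = bias + sum(weights[t] for t in tokenize(text))
            if label * score <= 0:  # misclassified: nudge weights toward label
                for t in tokenize(text):
                    weights[t] += label
                bias += label
    return weights, bias

def predict(weights, bias, text):
    score = bias + sum(weights[t] for t in tokenize(text))
    return 1 if score > 0 else -1

# Hypothetical labelled review comments (illustrative only).
train = [
    ("looks good to me, nice refactor", 1),
    ("great catch, thanks for the fix", 1),
    ("this is wrong and will break the build", -1),
    ("terrible naming, please rewrite this method", -1),
]
w, b = train_perceptron(train)
print(predict(w, b, "nice fix looks good"))        # → 1
print(predict(w, b, "this will break the build"))  # → -1
```

In the study itself, the labelled comments number 13,557 and seven different learners are compared; this sketch only shows the shape of the classification step.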
Related papers
- Does AI Code Review Lead to Code Changes? A Case Study of GitHub Actions [21.347559936084807]
AI-based code review tools automatically review and comment on pull requests to improve code quality. We present a large-scale empirical study of 16 popular AI-based code review actions for GitHub. We investigate how these tools are adopted and configured, whether their comments lead to code changes, and which factors influence their effectiveness.
arXiv Detail & Related papers (2025-08-26T07:55:23Z) - Leveraging Reward Models for Guiding Code Review Comment Generation [13.306560805316103]
Code review is a crucial component of modern software development, involving the evaluation of code quality, providing feedback on potential issues, and refining the code to address identified problems. Deep learning techniques are able to tackle the generative aspect of code review by commenting on a given piece of code as a human reviewer would. In this paper, we introduce CoRAL, a deep learning framework automating review comment generation by exploiting reinforcement learning with a reward mechanism.
arXiv Detail & Related papers (2025-06-04T21:31:38Z) - Deep Learning-based Code Reviews: A Paradigm Shift or a Double-Edged Sword? [14.970843824847956]
We run a controlled experiment with 29 experts who reviewed different programs with/without the support of an automatically generated code review. We show that reviewers consider most of the issues automatically identified by the LLM to be valid, and that the availability of an automated review as a starting point strongly influences their behavior. The reviewers who started from an automated review identified a higher number of low-severity issues, while not identifying more high-severity issues compared to a completely manual process.
arXiv Detail & Related papers (2024-11-18T09:24:01Z) - Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z) - Are your comments outdated? Towards automatically detecting code-comment consistency [3.204922482708544]
Outdated comments are harmful and may mislead subsequent developers.
We propose a learning-based method, called CoCC, to detect the consistency between code and comment.
Experiment results show that CoCC can effectively detect outdated comments with precision over 90%.
arXiv Detail & Related papers (2024-03-01T03:30:13Z) - CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
arXiv Detail & Related papers (2023-06-05T20:39:08Z) - ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework that leverages both lexical copying and references to semantically similar code obtained by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z) - Measuring Coding Challenge Competence With APPS [54.22600767666257]
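The retrieval step in ReACC-style completion can be illustrated with a minimal lexical retriever. This is a sketch under assumptions: ReACC combines a lexical (BM25-style) retriever with a semantic one, whereas the toy version below ranks candidates by Jaccard token overlap only, and `snippets`, `retrieve`, and `build_prompt` are illustrative names, not the paper's API.

```python
def jaccard(a, b):
    # Token-set similarity between two code fragments.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(query, corpus, k=1):
    # Rank corpus snippets by lexical similarity to the unfinished code.
    return sorted(corpus, key=lambda s: jaccard(query, s), reverse=True)[:k]

def build_prompt(unfinished, corpus):
    # Prepend the most similar retrieved snippet so a downstream
    # generator can copy tokens from it while completing the code.
    retrieved = retrieve(unfinished, corpus)
    return "\n".join(retrieved) + "\n" + unfinished

# Hypothetical retrieval corpus of previously seen code.
snippets = [
    "def add ( a , b ) : return a + b",
    "class Foo : pass",
]
print(build_prompt("def add ( a , b ) :", snippets))
```

The retrieved snippet sharing the `def add` prefix ranks first, so the generator sees a near-duplicate to copy from, which is the intuition behind the "lexical copying" half of the framework.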
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z) - Deep Just-In-Time Inconsistency Detection Between Comments and Source Code [51.00904399653609]
In this paper, we aim to detect whether a comment becomes inconsistent as a result of changes to the corresponding body of code.
We develop a deep-learning approach that learns to correlate a comment with code changes.
We show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system.
arXiv Detail & Related papers (2020-10-04T16:49:28Z) - Contrastive Code Representation Learning [95.86686147053958]
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
arXiv Detail & Related papers (2020-07-09T17:59:06Z)
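Both ContraCode and CONCORD rely on a contrastive objective of the same general shape: an anchor program embedding is pulled toward a semantics-preserving positive and pushed away from negatives. The sketch below implements the standard InfoNCE loss over toy embedding vectors; the real systems produce embeddings with neural encoders, which are omitted here, and the exact loss variants differ between the two papers.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss: low when the anchor is close to the
    positive (e.g. a semantics-preserving clone) and far from the
    negatives (unrelated or deviant programs)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Toy 2-d embeddings: a well-aligned positive yields a near-zero loss,
# a misaligned one yields a large loss.
print(info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]]))
print(info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]]))
```

Training with this loss moves benign clones closer together in the representation space and unrelated code further apart, which is precisely the behaviour the two contrastive papers above argue for.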
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.