ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review
Quality Estimation
- URL: http://arxiv.org/abs/2307.03996v1
- Date: Sat, 8 Jul 2023 15:37:48 GMT
- Title: ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review
Quality Estimation
- Authors: Saifullah Mahbub, Md. Easin Arafat, Chowdhury Rafeed Rahman, Zannatul
Ferdows, Masum Hasan
- Abstract summary: Inspection of review process effectiveness and continuous improvement can boost development productivity.
We propose ReviewRanker, a semi-supervised learning based system that assigns each code review a confidence score.
Our proposed method is trained on simple and well-defined labels provided by developers.
- Score: 0.6895577977557867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Code review is considered a key process in the software industry for
minimizing bugs and improving code quality. Inspection of review process
effectiveness and continuous improvement can boost development productivity.
Such inspection is a time-consuming and human-bias-prone task. We propose a
semi-supervised learning based system ReviewRanker which is aimed at assigning
each code review a confidence score which is expected to resonate with the
quality of the review. Our proposed method is trained on simple and
well-defined labels provided by developers. The labeling task requires little
to no effort from the developers and has an indirect relation to the end goal
(assignment of review confidence score). ReviewRanker is expected to improve
industry-wide code review quality inspection by reducing the human bias and
effort required for such a task. The system has the potential to minimize the
back-and-forth cycle existing in the development and review process. Usable
code and dataset for this research can be found at:
https://github.com/saifarnab/code_review
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- Sifting through the Chaff: On Utilizing Execution Feedback for Ranking the Generated Code Candidates [46.74037090843497]
Large Language Models (LLMs) are transforming the way developers approach programming by automatically generating code based on natural language descriptions.
This paper puts forward RankEF, an innovative approach for code ranking that leverages execution feedback.
Experiments on three code generation benchmarks demonstrate that RankEF significantly outperforms the state-of-the-art CodeRanker.
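The execution-feedback idea behind approaches like RankEF can be illustrated with a minimal sketch: run each generated candidate against a test case and rank passing candidates first. Note that RankEF itself trains a model to integrate execution feedback at ranking time rather than executing tests directly; all names here (`solve`, `passes_test`) are illustrative assumptions.

```python
# Hedged sketch of execution-based candidate ranking, not RankEF's
# actual implementation.

def passes_test(source, arg, expected):
    """Execute candidate code that should define `solve`, then test it."""
    env = {}
    try:
        exec(source, env)                     # define the candidate
        return env["solve"](arg) == expected  # check observed behavior
    except Exception:
        return False                          # syntax/runtime errors count as failures

def rank_by_execution(candidates, arg, expected):
    # sorted() is stable: passing candidates come first, original order
    # is preserved within the passing and failing groups.
    return sorted(candidates, key=lambda s: not passes_test(s, arg, expected))

candidates = [
    "def solve(x): return x + 2",   # fails the test below
    "def solve(x): return x * 2",   # passes the test below
]
print(rank_by_execution(candidates, 3, 6)[0])  # prints the x * 2 variant
```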
arXiv Detail & Related papers (2024-08-26T01:48:57Z)
- DOCE: Finding the Sweet Spot for Execution-Based Code Generation [69.5305729627198]
We propose a comprehensive framework that includes candidate generation, $n$-best reranking, minimum Bayes risk (MBR) decoding, and self-debugging as the core components.
Our findings highlight the importance of execution-based methods and the gap between execution-based and execution-free methods.
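MBR decoding over sampled code candidates can be sketched in a few lines: score each candidate by its agreement with the other samples and pick the most consistent one. The similarity function and names below are illustrative assumptions, not DOCE's implementation, which uses execution-based agreement.

```python
# Minimal sketch of minimum Bayes risk (MBR) selection over sampled
# code candidates, using text similarity as a stand-in utility.
from difflib import SequenceMatcher

def mbr_select(candidates):
    """Return the candidate with the highest total similarity to the rest.

    MBR treats pairwise agreement as a proxy for expected utility: the
    sample most consistent with the other samples is selected.
    """
    def sim(a, b):
        return SequenceMatcher(None, a, b).ratio()

    best, best_score = None, float("-inf")
    for i, cand in enumerate(candidates):
        score = sum(sim(cand, other)
                    for j, other in enumerate(candidates) if j != i)
        if score > best_score:
            best, best_score = cand, score
    return best

samples = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",
]
print(mbr_select(samples))  # the majority-consistent candidate wins
```

In DOCE-style pipelines the utility would compare execution results (e.g. agreement of outputs on shared inputs) rather than surface text, but the selection rule is the same.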
arXiv Detail & Related papers (2024-08-25T07:10:36Z)
- A GPT-based Code Review System for Programming Language Learning [0.0]
This research proposes a system that employs GPT-4 to offer learner-friendly code reviews and minimize the risk of AI-assisted cheating.
The improved system underwent evaluation by software education experts based on four criteria: strict code correctness checks, response time, lower API call costs, and the quality of code reviews.
arXiv Detail & Related papers (2024-06-21T12:16:01Z)
- Improving Automated Code Reviews: Learning from Experience [12.573740138977065]
This study investigates whether higher-quality reviews can be generated from automated code review models.
We find that experience-aware oversampling can increase the correctness, level of information, and meaningfulness of reviews.
arXiv Detail & Related papers (2024-02-06T07:48:22Z)
- Improving the Learning of Code Review Successive Tasks with Cross-Task Knowledge Distillation [1.0878040851638]
We introduce a novel deep-learning architecture, named DISCOREV, which employs cross-task knowledge distillation to address these tasks simultaneously.
We show that our approach generates better review comments, as measured by the BLEU score, as well as more accurate code refinement according to the CodeBLEU score.
arXiv Detail & Related papers (2024-02-03T07:02:22Z)
- CodeReviewer: Pre-Training for Automating Code Review Activities [36.40557768557425]
This research focuses on utilizing pre-training techniques for the tasks in the code review scenario.
We collect a large-scale dataset of real world code changes and code reviews from open-source projects in nine of the most popular programming languages.
To better understand code diffs and reviews, we propose CodeReviewer, a pre-trained model that utilizes four pre-training tasks tailored specifically for the code review scenario.
arXiv Detail & Related papers (2022-03-17T05:40:13Z)
- Predicting Code Review Completion Time in Modern Code Review [12.696276129130332]
Modern Code Review (MCR) is being adopted in both open source and commercial projects as a common practice.
Code reviews can experience significant delays due to various socio-technical factors.
There is a lack of tool support to help developers estimate the time required to complete a code review.
arXiv Detail & Related papers (2021-09-30T14:00:56Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Code Review in the Classroom [57.300604527924015]
Young developers in a classroom setting provide a clear picture of the favourable and problematic areas of the code review process.
Their feedback suggests that the process has been well received, with some suggestions for improving it.
This paper can be used as guidelines to perform code reviews in the classroom.
arXiv Detail & Related papers (2020-04-19T06:07:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.