Test Code Refactoring Unveiled: Where and How Does It Affect Test Code
Quality and Effectiveness?
- URL: http://arxiv.org/abs/2308.09547v1
- Date: Fri, 18 Aug 2023 13:25:53 GMT
- Title: Test Code Refactoring Unveiled: Where and How Does It Affect Test Code
Quality and Effectiveness?
- Authors: Luana Martins, Valeria Pontillo, Heitor Costa, Filomena Ferrucci,
Fabio Palomba, Ivan Machado
- Abstract summary: Refactoring has been widely investigated in the past in relation to production code quality.
There is still a lack of investigation into how developers typically refactor test code.
- Score: 13.911747089737762
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Context. Refactoring has been widely investigated in the past in relation to
production code quality, yet little is known about how developers apply
refactoring to test code. Specifically, there is still a lack of investigation
into how developers typically refactor test code and what effects this has on test code
quality and effectiveness. Objective. This paper presents a research agenda
aimed at bridging this knowledge gap by investigating (1) whether test
refactoring actually targets test classes affected by quality and effectiveness
concerns and (2) the extent to which refactoring contributes to the improvement
of test code quality and effectiveness. Method. We plan to conduct an
exploratory mining software repository study to collect test refactoring data
of open-source Java projects from GitHub and statistically analyze them in
combination with quality metrics, test smells, and code/mutation coverage
indicators. Furthermore, we will measure how refactoring operations impact the
quality and effectiveness of test code.
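To make the object of study concrete, the sketch below shows the kind of test-level refactoring the study proposes to mine: a JUnit 5 test affected by the Assertion Roulette smell and a refactored version that splits it into focused tests with descriptive failure messages. The example is illustrative only and not taken from the paper; the ShoppingCart class and its methods are hypothetical.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.api.Test;

// Hypothetical production class, included only to keep the example self-contained.
class ShoppingCart {
    private int count = 0;
    private double total = 0.0;

    void addItem(String name, int quantity, double unitPrice) {
        count += quantity;
        total += quantity * unitPrice;
    }

    int itemCount() { return count; }
    double total() { return total; }
    boolean isEmpty() { return count == 0; }
}

class ShoppingCartTest {

    // Smelly version: several unexplained assertions in one method
    // (Assertion Roulette); a failure does not say which expectation broke.
    @Test
    void testCart() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2, 10.0);
        assertEquals(2, cart.itemCount());
        assertEquals(20.0, cart.total());
        assertFalse(cart.isEmpty());
    }

    // Refactored version: one focused test per behaviour, shared setup
    // extracted into a helper, and explicit failure messages. Detecting such
    // operations and relating them to test smells and code/mutation coverage
    // indicators is the kind of analysis the proposed mining study describes.
    @Test
    void addingItemsUpdatesItemCount() {
        ShoppingCart cart = cartWithTwoBooks();
        assertEquals(2, cart.itemCount(), "item count after adding two books");
    }

    @Test
    void addingItemsUpdatesTotal() {
        ShoppingCart cart = cartWithTwoBooks();
        assertEquals(20.0, cart.total(), "total price for two books at 10.0 each");
    }

    private ShoppingCart cartWithTwoBooks() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2, 10.0);
        return cart;
    }
}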
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Investigating Student Reasoning in Method-Level Code Refactoring: A Think-Aloud Study [0.7120027021375674]
Code and code quality are core topics in software engineering education.
Students often produce code with persistent quality issues.
Students were able to remove code quality issues in most cases.
arXiv Detail & Related papers (2024-10-28T09:50:16Z)
- Context-Enhanced LLM-Based Framework for Automatic Test Refactoring [10.847400457238423]
Test smells arise from poor design practices and insufficient domain knowledge.
We propose UTRefactor, a context-enhanced, LLM-based framework for automatic test refactoring in Java projects.
We evaluate UTRefactor on 879 tests from six open-source Java projects, reducing the number of test smells from 2,375 to 265, achieving an 89% reduction.
arXiv Detail & Related papers (2024-09-25T08:42:29Z)
- DOCE: Finding the Sweet Spot for Execution-Based Code Generation [69.5305729627198]
We propose a comprehensive framework that includes candidate generation, $n$-best reranking, minimum Bayes risk (MBR) decoding, and self-debugging as the core components.
Our findings highlight the importance of execution-based methods and the gap between execution-based and execution-free methods.
arXiv Detail & Related papers (2024-08-25T07:10:36Z)
- Code Agents are State of the Art Software Testers [10.730852617039451]
We investigate the capability of LLM-based Code Agents for formalizing user issues into test cases.
We propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth patches, and golden tests.
We find that LLMs, paired with Code Agents designed for code repair, generally perform surprisingly well at generating relevant test cases.
arXiv Detail & Related papers (2024-06-18T14:54:37Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Software Testing and Code Refactoring: A Survey with Practitioners [3.977213079821398]
This study aims to explore how software testing professionals deal with code refactoring, in order to understand the benefits and limitations of this practice in the context of software testing.
We concluded that, in the context of software testing, refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals implement refactoring in the code of automated tests, allowing them to improve their coding abilities.
arXiv Detail & Related papers (2023-10-03T01:07:39Z)
- Do code refactorings influence the merge effort? [80.1936417993664]
Multiple contributors frequently change the source code in parallel to implement new features, fix bugs, refactor existing code, and make other changes.
These simultaneous changes need to be merged into the same version of the source code.
Studies show that 10 to 20 percent of all merge attempts result in conflicts, which require manual developer intervention to complete the process.
arXiv Detail & Related papers (2023-05-10T13:24:59Z)
- CodeT: Code Generation with Generated Tests [49.622590050797236]
We explore the use of pre-trained language models to automatically generate test cases.
CodeT executes the code solutions using the generated test cases, and then chooses the best solution.
We evaluate CodeT on five different pre-trained models with both HumanEval and MBPP benchmarks.
arXiv Detail & Related papers (2022-07-21T10:18:37Z)
- How We Refactor and How We Document it? On the Use of Supervised Machine Learning Algorithms to Classify Refactoring Documentation [25.626914797750487]
Refactoring is the art of improving the design of a system without altering its external behavior.
This study categorizes commits into 3 categories, namely, Internal QA, External QA, and Code Smell Resolution, along with the traditional BugFix and Functional categories.
To better understand our classification results, we analyzed commit messages to extract patterns that developers regularly use to describe their smells.
arXiv Detail & Related papers (2020-10-26T20:33:17Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)