Diagnosing Refactoring Dangers
- URL: http://arxiv.org/abs/2411.08648v1
- Date: Wed, 13 Nov 2024 14:39:37 GMT
- Title: Diagnosing Refactoring Dangers
- Authors: Wouter Brinksma, William Wernsen, Evert Verduin, Herman Hilberink, Patrick de Beer, Lex Bijlsma, Harrie Passier
- Abstract summary: Existing behavior preservation analyses often lack comprehensive insights into refactoring rejections and do not provide actionable solutions.
We developed a conceptual model to detect refactoring dangers, and created an Eclipse plugin based upon this model, called ReFD.
ReFD evaluates a given code context to identify if these potential risks are present, making them actual risks, and employs a verdict mechanism to reduce false positives.
- Score: 0.7036032466145112
- License:
- Abstract: This report investigates the relationship between software refactoring and behavior preservation. Existing behavior preservation analyses often lack comprehensive insights into refactoring rejections and do not provide actionable solutions. To address these issues, we developed a conceptual model to detect refactoring dangers, and created an Eclipse plugin based upon this model, called ReFD. Every refactoring can be partitioned in microsteps, each of which carries potential risks. ReFD evaluates a given code context to identify if these potential risks are present, making them actual risks, and employs a verdict mechanism to reduce false positives. To facilitate the risk detection, several components called detectors and subdetectors are defined, which can be reused for multiple refactorings. The tool was validated by implementing the detection for multiple refactorings, which produce the expected information about the risks detected. This information leads a developer to actively think about solutions to the problems a refactoring might cause within an actual codebase.
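The abstract above describes an architecture of microsteps, reusable detectors and subdetectors, and a verdict mechanism, but does not include code. The following is only a minimal, hypothetical Java sketch of how such a detector-and-verdict composition could be organised; the names (SubDetector, RiskDetector, Verdict, CodeContext) are illustrative assumptions and do not reflect the actual ReFD plugin API.

```java
// Hypothetical sketch in the spirit of the described detector/subdetector model.
// All names are illustrative assumptions, not the actual ReFD API.
import java.util.List;

enum Verdict { SAFE, WARNING, DANGEROUS }

/** Placeholder for whatever code representation the analysis inspects. */
class CodeContext { /* e.g. AST nodes, type bindings, call graph */ }

/** A subdetector checks one concrete condition in the code context. */
interface SubDetector {
    boolean applies(CodeContext context);
}

/** A detector aggregates subdetector results into a verdict for one microstep risk. */
class RiskDetector {
    private final String riskDescription;
    private final List<SubDetector> subDetectors;

    RiskDetector(String riskDescription, List<SubDetector> subDetectors) {
        this.riskDescription = riskDescription;
        this.subDetectors = subDetectors;
    }

    /** A potential risk is reported as an actual risk only when every subdetector
        fires; requiring agreement is one simple way to reduce false positives. */
    Verdict evaluate(CodeContext context) {
        long hits = subDetectors.stream().filter(s -> s.applies(context)).count();
        if (hits > 0 && hits == subDetectors.size()) return Verdict.DANGEROUS;
        if (hits > 0) return Verdict.WARNING;
        return Verdict.SAFE;
    }

    String describe() { return riskDescription; }
}
```

Because each detector is tied to a microstep rather than to a whole refactoring, the same RiskDetector instances could be reused by any refactoring that shares that microstep, which mirrors the reuse the abstract describes.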
Related papers
- An Empirical Study on the Potential of LLMs in Automated Software Refactoring [9.157968996300417]
We investigate the potential of large language models (LLMs) in automated software refactoring.
We find that 13 out of the 176 solutions suggested by ChatGPT and 9 out of the 137 solutions suggested by Gemini were unsafe in that they either changed the functionality of the source code or introduced syntax errors.
arXiv Detail & Related papers (2024-11-07T05:35:55Z) - Deciphering Refactoring Branch Dynamics in Modern Code Review: An Empirical Study on Qt [5.516979718589074]
This study aims to understand the review process for changes in the Refactor branch and to identify what developers care about when reviewing code in this branch.
We find that reviews involving changes from the Refactor branch take significantly less time to resolve in terms of code review.
Additionally, documentation of developer intent is notably sparse within the Refactor branch compared to other branches.
arXiv Detail & Related papers (2024-10-07T01:18:56Z) - Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, resulting in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z) - Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations [76.19419888353586]
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
We present our efforts to create and deploy a library of detectors: compact and easy-to-build classification models that provide labels for various harms.
arXiv Detail & Related papers (2024-03-09T21:07:16Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Automating Source Code Refactoring in the Classroom [15.194527511076725]
This paper discusses the results of an experiment that involved carrying out various classroom activities for the purpose of removing antipatterns using JDeodorant, an Eclipse plugin that supports antipattern detection and correction.
The results of the quantitative and qualitative analysis with 171 students show that students tend to appreciate the idea of learning refactoring, and are satisfied with various aspects of the JDeodorant plugin's operation.
arXiv Detail & Related papers (2023-11-05T18:46:00Z) - State of Refactoring Adoption: Better Understanding Developer Perception
of Refactoring [5.516979718589074]
We aim to explore how developers document their refactoring activities during the software life cycle.
We call such activity Self-Affirmed Refactoring (SAR), which indicates developers' documentation of their activities.
We propose an approach to identify whether a commit describes developer-related refactoring events, in order to classify them according to the common quality improvement categories.
arXiv Detail & Related papers (2023-06-09T16:38:20Z) - RefBERT: A Two-Stage Pre-trained Framework for Automatic Rename
Refactoring [57.8069006460087]
We study automatic rename refactoring on variable names, which is considered more challenging than other rename refactoring activities.
We propose RefBERT, a two-stage pre-trained framework for rename refactoring on variable names.
We show that the generated variable names of RefBERT are more accurate and meaningful than those produced by the existing method.
arXiv Detail & Related papers (2023-05-28T12:29:39Z) - Do code refactorings influence the merge effort? [80.1936417993664]
Multiple contributors frequently change the source code in parallel to implement new features, fix bugs, refactor existing code, and make other changes.
These simultaneous changes need to be merged into the same version of the source code.
Studies show that 10 to 20 percent of all merge attempts result in conflicts, which require the developer's manual intervention to complete the process.
arXiv Detail & Related papers (2023-05-10T13:24:59Z) - Online Safety Property Collection and Refinement for Safe Deep
Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP allows higher returns and lower violations over previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z) - How We Refactor and How We Document it? On the Use of Supervised Machine
Learning Algorithms to Classify Refactoring Documentation [25.626914797750487]
Refactoring is the art of improving the design of a system without altering its external behavior.
This study categorizes commits into 3 categories, namely, Internal QA, External QA, and Code Smell Resolution, along with the traditional BugFix and Functional categories.
To better understand our classification results, we analyzed commit messages to extract patterns that developers regularly use to describe their smells.
arXiv Detail & Related papers (2020-10-26T20:33:17Z)
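The entry directly above classifies refactoring-related commit messages into categories such as Internal QA, External QA, and Code Smell Resolution by mining the textual patterns developers use. Purely as an illustration of that idea (the study itself trains supervised machine-learning classifiers), a keyword-pattern baseline might look like the following Java sketch; the keyword lists, and any category handling beyond the names quoted above, are assumptions.

```java
// Illustrative keyword-based baseline for labelling commit messages.
// The cited study uses supervised ML; this sketch only mimics the idea of
// matching the textual patterns developers use when documenting refactorings.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class CommitClassifier {
    private static final Map<String, List<String>> PATTERNS = new LinkedHashMap<>();
    static {
        // Keyword lists here are made up for the example.
        PATTERNS.put("Code Smell Resolution", List.of("code smell", "god class", "duplicate code"));
        PATTERNS.put("Internal QA", List.of("readability", "maintainability", "clean up", "simplify"));
        PATTERNS.put("External QA", List.of("performance", "memory usage", "stability"));
        PATTERNS.put("BugFix", List.of("fix bug", "fixes #", "hotfix"));
    }

    /** Returns the first category whose keywords appear in the message, else "Functional". */
    static String classify(String commitMessage) {
        String text = commitMessage.toLowerCase();
        for (Map.Entry<String, List<String>> entry : PATTERNS.entrySet()) {
            if (entry.getValue().stream().anyMatch(text::contains)) {
                return entry.getKey();
            }
        }
        return "Functional";
    }

    public static void main(String[] args) {
        // Prints "Code Smell Resolution".
        System.out.println(classify("Refactor UserService to remove duplicate code"));
    }
}
```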
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.