Towards Understanding the Impact of Code Modifications on Software Quality Metrics
- URL: http://arxiv.org/abs/2404.03953v1
- Date: Fri, 5 Apr 2024 08:41:18 GMT
- Title: Towards Understanding the Impact of Code Modifications on Software Quality Metrics
- Authors: Thomas Karanikiotis, Andreas L. Symeonidis
- Abstract summary: This study aims to assess and interpret the impact of code modifications on software quality metrics.
The underlying hypothesis posits that code modifications inducing similar changes in software quality metrics can be grouped into distinct clusters.
The results reveal distinct clusters of code modifications, each accompanied by a concise description of its collective impact on software quality metrics.
- Score: 1.2277343096128712
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Context: In the realm of software development, maintaining high software quality is a persistent challenge. However, addressing this challenge is often impeded by the lack of a comprehensive understanding of how specific code modifications influence quality metrics. Objective: This study ventures to bridge this gap through an approach that aspires to assess and interpret the impact of code modifications. The underlying hypothesis posits that code modifications inducing similar changes in software quality metrics can be grouped into distinct clusters, which can be effectively described using an AI language model, thus providing a simple understanding of code changes and their quality implications. Method: To validate this hypothesis, we built and analyzed a dataset from popular GitHub repositories, segmented into individual code modifications. Each project was evaluated against software quality metrics pre- and post-application. Machine learning techniques were utilized to cluster these modifications based on the induced changes in the metrics. Simultaneously, an AI language model was employed to generate descriptions of each modification's function. Results: The results reveal distinct clusters of code modifications, each accompanied by a concise description of its collective impact on software quality metrics. Conclusions: The findings suggest that this research is a significant step towards a comprehensive understanding of the complex relationship between code changes and software quality, which has the potential to transform software maintenance strategies and enable the development of more accurate quality prediction models.
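As a rough illustration of the clustering step described in the Method above, the sketch below groups code modifications by the change they induce in a pre/post quality-metric table. The metric names, the toy data, and the choice of KMeans are assumptions for illustration; the paper's actual metric set and clustering algorithm are not reproduced here.

```python
# Minimal sketch: cluster code modifications by the change (delta) they
# induce in software quality metrics measured before and after each change.
# Metric names, toy values, and the use of KMeans are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical input: one row per code modification, each quality metric
# measured pre- and post-application of the modification.
mods = pd.DataFrame({
    "complexity_pre":  [12, 30, 8, 45, 10],
    "complexity_post": [10, 31, 8, 40, 15],
    "coupling_pre":    [5, 9, 3, 14, 4],
    "coupling_post":   [5, 8, 3, 12, 6],
})

# Represent each modification by the induced change in every metric.
metrics = ["complexity", "coupling"]
deltas = pd.DataFrame({m: mods[f"{m}_post"] - mods[f"{m}_pre"] for m in metrics})

# Standardize the deltas and group modifications with similar metric impact.
X = StandardScaler().fit_transform(deltas)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Summarize each cluster by its mean metric change; in the paper, an AI
# language model is then used to describe the modifications in each cluster.
for cluster_id in sorted(set(labels)):
    print(f"Cluster {cluster_id}: mean metric change")
    print(deltas[labels == cluster_id].mean().round(2))
```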
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification [52.095460362197336]
Large language models (LLMs) struggle with consistent and accurate reasoning.
LLMs are trained primarily on correct solutions, reducing their ability to detect and learn from errors.
We propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
arXiv Detail & Related papers (2024-10-05T05:21:48Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help overcome common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Free Open Source Communities Sustainability: Does It Make a Difference in Software Quality? [2.981092370528753]
This study aims to empirically explore how the different aspects of sustainability impact software quality.
16 sustainability metrics across four categories were sampled and applied to a set of 217 OSS projects.
arXiv Detail & Related papers (2024-02-10T09:37:44Z)
- Do Internal Software Metrics Have Relationship with Fault-proneness and Change-proneness? [1.9526430269580959]
We identified 25 internal software metrics along with the measures of change-proneness and fault-proneness within the Apache and Eclipse ecosystems.
Most of the metrics have little to no correlation with fault-proneness.
In contrast, metrics related to inheritance, coupling, and comments showed a moderate to high correlation with change-proneness.
arXiv Detail & Related papers (2023-09-23T07:19:41Z)
- Quantifying Process Quality: The Role of Effective Organizational Learning in Software Evolution [0.0]
Real-world software applications must constantly evolve to remain relevant.
Traditional methods of software quality control involve software quality models and continuous code inspection tools.
However, there is a strong correlation and causation between the quality of the development process and the resulting software product.
arXiv Detail & Related papers (2023-05-29T12:57:14Z)
- An Analysis of the Effects of Decoding Algorithms on Fairness in Open-Ended Language Generation [77.44921096644698]
We present a systematic analysis of the impact of decoding algorithms on LM fairness.
We analyze the trade-off between fairness, diversity and quality.
arXiv Detail & Related papers (2022-10-07T21:33:34Z)
- The Mind Is a Powerful Place: How Showing Code Comprehensibility Metrics Influences Code Understanding [10.644832702859484]
We investigate whether a displayed metric value for source code comprehensibility anchors developers in their subjective rating of source code comprehensibility.
We found that the displayed value of a comprehensibility metric has a significant and large anchoring effect on a developer's code comprehensibility rating.
arXiv Detail & Related papers (2020-12-16T14:27:45Z)
- GO FIGURE: A Meta Evaluation of Factuality in Summarization [131.1087461486504]
We introduce GO FIGURE, a meta-evaluation framework for evaluating factuality evaluation metrics.
Our benchmark analysis on ten factuality metrics reveals that our framework provides a robust and efficient evaluation.
It also reveals that while QA metrics generally improve over standard metrics that measure factuality across domains, performance is highly dependent on the way in which questions are generated.
arXiv Detail & Related papers (2020-10-24T08:30:20Z)