Quantifying Process Quality: The Role of Effective Organizational
Learning in Software Evolution
- URL: http://arxiv.org/abs/2305.18061v4
- Date: Wed, 30 Aug 2023 12:00:02 GMT
- Title: Quantifying Process Quality: The Role of Effective Organizational
Learning in Software Evolution
- Authors: Sebastian Hönel
- Abstract summary: Real-world software applications must constantly evolve to remain relevant.
Traditional methods of software quality control involve software quality models and continuous code inspection tools.
However, there is a strong correlation and causation between the quality of the development process and the resulting software product.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Real-world software applications must constantly evolve to remain relevant.
This evolution occurs when developing new applications or adapting existing
ones to meet new requirements, make corrections, or incorporate future
functionality. Traditional methods of software quality control involve software
quality models and continuous code inspection tools. These measures focus on
directly assessing the quality of the software. However, there is a strong
correlation and causation between the quality of the development process and
the resulting software product. Therefore, improving the development process
indirectly improves the software product, too. To achieve this, effective
learning from past processes is necessary, often embraced through post mortem
organizational learning. While qualitative evaluation of large artifacts is
common, smaller quantitative changes captured by application lifecycle
management are often overlooked. In addition to software metrics, these smaller
changes can reveal complex phenomena related to project culture and management.
Leveraging these changes can help detect and address such complex issues.
Software evolution was previously measured by the size of changes, but the
lack of consensus on a reliable and versatile quantification method prevents
its use as a dependable metric. Different size classifications fail to reliably
describe the nature of evolution. While application lifecycle management data
is rich, identifying which artifacts can model detrimental managerial practices
remains uncertain. Approaches such as simulation modeling, discrete-event
simulation, or Bayesian networks have only limited ability to exploit
continuous-time process models of such phenomena. Even worse, the accessibility
and mechanistic insight into such gray- or black-box models are typically very
low. To address these challenges, we suggest leveraging objectively [...]
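The abstract's point about measuring software evolution by the size of changes can be made concrete with commit-level data from version control. Below is a minimal sketch, not taken from the paper, that computes per-commit churn from `git log --numstat` output and assigns illustrative size classes; the sample log, thresholds, and labels are assumptions made purely for demonstration.

```python
"""Minimal sketch (assumption, not the paper's method): quantify software
evolution by the size of changes, using per-commit churn parsed from
`git log --numstat --format=%H` output."""

from collections import defaultdict

# Hypothetical sample of `git log --numstat --format=%H` output; in practice
# this text would come from running git against a real repository.
SAMPLE_LOG = """\
a1b2c3d
12\t3\tsrc/app.py
0\t45\tsrc/legacy.py

e4f5a6b
250\t10\tsrc/feature.py
30\t2\ttests/test_feature.py
"""


def commit_churn(log_text):
    """Return {commit_hash: lines_added + lines_deleted} per commit."""
    churn = defaultdict(int)
    current = None
    for line in log_text.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 3:  # numstat row: <added> <deleted> <path>
            added, deleted = parts[0], parts[1]
            if added.isdigit() and deleted.isdigit():  # binary files show "-"
                churn[current] += int(added) + int(deleted)
        else:  # a commit hash line
            current = line.strip()
    return dict(churn)


def size_class(churn, small=50, large=200):
    """Illustrative size buckets; real thresholds would need calibration."""
    if churn <= small:
        return "small"
    return "large" if churn >= large else "medium"


if __name__ == "__main__":
    for sha, total in commit_churn(SAMPLE_LOG).items():
        print(sha, total, size_class(total))
```

Such per-commit sizes are only one of the process signals the abstract alludes to; in a real analysis they would be combined with other application lifecycle management data rather than used in isolation.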
Related papers
- Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement [62.94719119451089]
The Lingma SWE-GPT series learns from and simulates real-world code submission activities.
Lingma SWE-GPT 72B resolves 30.20% of GitHub issues, marking a significant improvement in automatic issue resolution.
arXiv Detail & Related papers (2024-11-01T14:27:16Z)
- How to Measure Performance in Agile Software Development? A Mixed-Method Study [2.477589198476322]
The study aims to identify challenges that arise when using agile software development performance metrics in practice.
Results show that while performance metrics are widely used in practice, agile software development teams face challenges due to a lack of transparency and standardization as well as insufficient accuracy.
arXiv Detail & Related papers (2024-07-08T19:53:01Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Towards Understanding the Impact of Code Modifications on Software Quality Metrics [1.2277343096128712]
This study aims to assess and interpret the impact of code modifications on software quality metrics.
The underlying hypothesis posits that code modifications inducing similar changes in software quality metrics can be grouped into distinct clusters.
The results reveal distinct clusters of code modifications, each accompanied by a concise description of its impact on software quality metrics.
arXiv Detail & Related papers (2024-04-05T08:41:18Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Do Internal Software Metrics Have Relationship with Fault-proneness and Change-proneness? [1.9526430269580959]
We identified 25 internal software metrics along with the measures of change-proneness and fault-proneness within the Apache and Eclipse ecosystems.
Most of the metrics have little to no correlation with fault-proneness.
Metrics related to inheritance, coupling, and comments show a moderate to high correlation with change-proneness (a minimal correlation sketch appears after this list).
arXiv Detail & Related papers (2023-09-23T07:19:41Z)
- Contrastive Example-Based Control [163.6482792040079]
We propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function.
Across a range of state-based and image-based offline control tasks, our method outperforms baselines that use learned reward functions.
arXiv Detail & Related papers (2023-07-24T19:43:22Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Software Effort Estimation using parameter tuned Models [1.9336815376402716]
Imprecise estimation is a major cause of project failure.
The greatest pitfall of the software industry is the fast-changing nature of software development.
We need useful models that accurately predict the cost of developing a software product.
arXiv Detail & Related papers (2020-08-25T15:18:59Z)
- Many-Objective Software Remodularization using NSGA-III [17.487053547108516]
We propose a novel many-objective search-based approach using NSGA-III.
The process aims at finding the optimal remodularization solutions that improve the structure of packages, minimize the number of changes, preserve semantics coherence, and re-use the history of changes.
arXiv Detail & Related papers (2020-05-13T18:34:15Z)
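As a companion to the entry on internal software metrics and fault-/change-proneness above, the following hypothetical sketch shows the kind of rank-correlation analysis such studies report. The class-level metric names, the values, and the use of Spearman correlation via `scipy.stats.spearmanr` are illustrative assumptions, not details taken from that paper.

```python
"""Hypothetical sketch: Spearman rank correlation between class-level
internal metrics and change-proneness (number of changes per class).
All data below is invented for illustration."""

from scipy.stats import spearmanr

# Per-class measurements: (depth_of_inheritance, coupling, comment_ratio, changes)
classes = [
    (1, 4, 0.10, 3),
    (3, 9, 0.05, 14),
    (2, 6, 0.08, 7),
    (5, 12, 0.02, 21),
    (1, 3, 0.15, 2),
    (4, 10, 0.04, 18),
]

metric_names = ["depth_of_inheritance", "coupling", "comment_ratio"]
changes = [row[-1] for row in classes]

# Correlate each metric with the change counts; rho in [-1, 1].
for i, name in enumerate(metric_names):
    values = [row[i] for row in classes]
    rho, p = spearmanr(values, changes)
    print(f"{name:>20}: rho={rho:+.2f} (p={p:.3f})")
```

With only six invented data points the p-values carry no weight; the point is the shape of the analysis, not the numbers.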