State-Of-The-Practice in Quality Assurance in Java-Based Open Source
Software Development
- URL: http://arxiv.org/abs/2306.09665v1
- Date: Fri, 16 Jun 2023 07:43:11 GMT
- Title: State-Of-The-Practice in Quality Assurance in Java-Based Open Source
Software Development
- Authors: Ali Khatami, Andy Zaidman
- Abstract summary: We investigate whether and how quality assurance approaches are being used in conjunction in the development of 1,454 popular open source software projects on GitHub.
Our study indicates that projects typically do not follow all quality assurance practices together with high intensity.
In general, our study provides a deeper understanding of how existing quality assurance approaches are currently being used in Java-based open source software development.
- Score: 3.4800665691198565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To ensure the quality of software systems, software engineers can make use of
a variety of quality assurance approaches, such as software testing, modern
code review, automated static analysis, and build automation. Each of these
quality assurance practices has been studied in depth in isolation, but there
is a clear knowledge gap when it comes to our understanding of how these
approaches are being used in conjunction or not. In our study, we broadly
investigate whether and how these quality assurance approaches are being used
in conjunction in the development of 1,454 popular open source software
projects on GitHub. Our study indicates that projects typically do not follow
all quality assurance practices together with high intensity. In fact, we
observe only weak correlations among some quality assurance practices. In general,
our study provides a deeper understanding of how existing quality assurance
approaches are currently being used in Java-based open source software
development. In addition, we zoomed in specifically on the more mature projects
in our dataset, and we generally observe that more mature projects are more
intense in their application of the quality assurance practices, with more
focus on their automated static analysis tool (ASAT) usage and code reviewing,
but no strong change in their continuous integration (CI) usage.
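
The abstract reports only weak correlations among practices. As a rough illustration of the kind of analysis this implies (correlating per-project intensity scores for each practice), here is a minimal sketch in Python; the metric names and values are invented for illustration and are not the paper's actual measurements.

```python
# Illustrative sketch only: hypothetical per-project QA-practice intensity
# scores (e.g., test-to-code ratio, fraction of reviewed PRs, ASAT config
# presence, CI build frequency); column names and values are assumptions,
# not the paper's actual metrics.
import pandas as pd
from scipy.stats import spearmanr

projects = pd.DataFrame({
    "testing":     [0.42, 0.10, 0.75, 0.33, 0.58],
    "code_review": [0.90, 0.20, 0.65, 0.40, 0.70],
    "asat":        [1.00, 0.00, 1.00, 0.00, 1.00],
    "ci":          [0.80, 0.05, 0.60, 0.50, 0.66],
})

# Pairwise Spearman rank correlations between practice intensities.
practices = projects.columns
for i, a in enumerate(practices):
    for b in practices[i + 1:]:
        rho, p = spearmanr(projects[a], projects[b])
        print(f"{a} vs {b}: rho={rho:.2f}, p={p:.3f}")
```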
Related papers
- Quality Assurance Practices in Agile Methodology [0.0]
As the complexity of software increases day by day, so does the requirement and need for a variety of software products.
The practice of applying software metrics to the development process and to a software product is a critical task, crucial enough to require study and discipline.
arXiv Detail & Related papers (2024-11-07T19:45:40Z) - Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement [62.94719119451089]
The Lingma SWE-GPT series learns from and simulates real-world code submission activities.
Lingma SWE-GPT 72B resolves 30.20% of GitHub issues, marking a significant improvement in automatic issue resolution.
arXiv Detail & Related papers (2024-11-01T14:27:16Z) - How to Measure Performance in Agile Software Development? A Mixed-Method Study [2.477589198476322]
The study aims to identify challenges that arise when using agile software development performance metrics in practice.
Results show that while performance metrics are widely used in practice, agile software development teams face challenges due to a lack of transparency and standardization, as well as insufficient accuracy.
arXiv Detail & Related papers (2024-07-08T19:53:01Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help overcome common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
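
The abstract does not describe the prompt design, so the following is only a hypothetical sketch of how an error message and candidate diffs might be combined into a single LLM query; `complete` is a stand-in for any text-completion client, and `rank_suspect_changes` is an invented name, not EA's implementation.

```python
# Hypothetical sketch: ask an LLM which candidate code change most likely
# caused a failing test. `complete` is a placeholder for any
# text-completion client; this is not the paper's actual pipeline.
from typing import Callable

def rank_suspect_changes(error_message: str,
                         diffs: dict[str, str],
                         complete: Callable[[str], str]) -> str:
    numbered = "\n\n".join(
        f"[{name}]\n{diff}" for name, diff in diffs.items()
    )
    prompt = (
        "A test fails with the following error:\n"
        f"{error_message}\n\n"
        "Below are the code changes merged since the last passing run. "
        "Answer with the bracketed name of the change most likely to have "
        "caused the failure.\n\n"
        f"{numbered}"
    )
    return complete(prompt).strip()
```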
arXiv Detail & Related papers (2024-06-11T09:21:50Z) - A Roadmap for Software Testing in Open Collaborative Development Environments [14.113209837391183]
The distributed nature of open collaborative development, along with its diverse contributors and rapid iterations, presents new challenges for ensuring software quality.
This paper offers a comprehensive review and analysis of recent advancements in software quality assurance within open collaborative development environments.
arXiv Detail & Related papers (2024-06-08T10:50:24Z) - Automatic Programming: Large Language Models and Beyond [48.34544922560503]
We study concerns around code quality, security and related issues of programmer responsibility.
We discuss how advances in software engineering can enable automatic programming.
We conclude with a forward-looking view, focusing on the programming environment of the near future.
arXiv Detail & Related papers (2024-05-03T16:19:24Z) - Code Ownership in Open-Source AI Software Security [18.779538756226298]
We use code ownership metrics to investigate the correlation with latent vulnerabilities across five prominent open-source AI software projects.
The findings suggest a positive relationship between high-level ownership (characterised by a limited number of minor contributors) and a decrease in vulnerabilities.
With these novel code ownership metrics, we have implemented a Python-based command-line application to aid project curators and quality assurance professionals in evaluating and benchmarking their on-site projects.
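
The paper's exact metric definitions are not reproduced here; the sketch below uses a common convention from the code-ownership literature, in which a file's ownership is the commit share of its top contributor and minor contributors are authors below a 5% share. The git invocation, threshold, and example path are illustrative assumptions, not the paper's CLI.

```python
# Sketch of common code-ownership metrics (not necessarily the paper's
# exact definitions): ownership = commit share of a file's top
# contributor; minor contributors = authors with < 5% of its commits.
import subprocess
from collections import Counter

def ownership_metrics(path: str, minor_threshold: float = 0.05) -> dict:
    # One author email per commit that touched this file.
    log = subprocess.run(
        ["git", "log", "--follow", "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    commits = Counter(line for line in log.splitlines() if line)
    total = sum(commits.values())
    if total == 0:
        raise ValueError(f"no commit history found for {path}")
    shares = [n / total for n in commits.values()]
    return {
        "ownership": max(shares),
        "minor_contributors": sum(s < minor_threshold for s in shares),
        "total_contributors": len(shares),
    }

print(ownership_metrics("src/main/java/App.java"))  # hypothetical path
```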
arXiv Detail & Related papers (2023-12-18T00:37:29Z) - Using Machine Learning To Identify Software Weaknesses From Software
Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision tree, neural network, and convolutional neural network (CNN) algorithms were tested.
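
As a rough sketch of the pipeline described above (latent semantic analysis over TF-IDF features of requirement texts, then a classifier such as an SVM), the texts and CWE labels below are placeholders, not the actual PROMISE_exp data.

```python
# Minimal sketch of the described approach: LSA (truncated SVD over TF-IDF)
# features for requirement texts, then an SVM classifier mapping them to
# CWE categories. Texts and labels are placeholders, not PROMISE_exp data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

requirements = [
    "The system shall encrypt all stored user credentials.",
    "The application shall validate all user-supplied input fields.",
    "Sessions shall expire after 15 minutes of inactivity.",
]
cwe_labels = ["CWE-256", "CWE-20", "CWE-613"]  # placeholder labels

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2),  # LSA; tune n_components on real data
    LinearSVC(),
)
model.fit(requirements, cwe_labels)
print(model.predict(["Input fields must be sanitized before use."]))
```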
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and
Communicating the Uncertainty of AI [49.64037266892634]
We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
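
UQ360's own API is best taken from its documentation; as a generic illustration of the kind of capability the toolkit packages (quantifying a model's predictive uncertainty), here is a small ensemble-spread sketch in plain scikit-learn. This is explicitly not UQ360 code.

```python
# Generic uncertainty-quantification sketch (NOT the UQ360 API): estimate
# predictive uncertainty for a regressor from the spread of its ensemble.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-tree predictions; their standard deviation is a simple
# uncertainty estimate for each new input.
X_new = np.array([[0.0], [2.5]])
per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])
mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"x={x:+.1f}: prediction {m:.2f} +/- {2 * s:.2f} (2 sigma)")
```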
arXiv Detail & Related papers (2021-06-02T18:29:04Z) - Quality Management of Machine Learning Systems [0.0]
Artificial Intelligence (AI) has become a part of our daily lives due to major advances in Machine Learning (ML) techniques.
For business/mission-critical systems, serious concerns about reliability and maintainability of AI applications remain.
This paper presents a view of a holistic quality management framework for ML applications based on the current advances.
arXiv Detail & Related papers (2020-06-16T21:34:44Z)