Quality Assurance Challenges for Machine Learning Software Applications During Software Development Life Cycle Phases
- URL: http://arxiv.org/abs/2105.01195v1
- Date: Mon, 3 May 2021 22:29:23 GMT
- Title: Quality Assurance Challenges for Machine Learning Software Applications During Software Development Life Cycle Phases
- Authors: Md Abdullah Al Alamin, Gias Uddin
- Abstract summary: The paper conducts an in-depth review of literature on the quality assurance of Machine Learning (ML) models.
We develop a taxonomy of MLSA quality assurance issues by mapping the various ML adoption challenges across the different phases of the software development life cycle (SDLC).
This mapping can help prioritize quality assurance efforts of MLSAs where the adoption of ML models can be considered crucial.
- Score: 1.4213973379473654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past decades, the revolutionary advances of Machine Learning (ML) have
driven a rapid adoption of ML models into software systems of diverse types.
Such Machine Learning Software Applications (MLSAs) are gaining importance in
our daily lives. As such, the Quality Assurance (QA) of MLSAs is of paramount
importance. Several research efforts are dedicated to determining the specific
challenges we can face while adopting ML models into software systems. However,
we are aware of no research that offers a holistic view of the distribution of
those ML quality assurance challenges across the various phases of the software
development life cycle (SDLC). This paper conducts an in-depth literature
review of a large volume of research papers that focused on the quality
assurance of ML models. We developed a taxonomy of MLSA quality assurance
issues by mapping the various ML adoption challenges across different phases of
SDLC. We provide recommendations and research opportunities to improve SDLC
practices based on the taxonomy. This mapping can help prioritize quality
assurance efforts of MLSAs where the adoption of ML models can be considered
crucial.
Related papers
- Characterization of Large Language Model Development in the Datacenter [55.9909258342639]
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
arXiv Detail & Related papers (2024-03-12T13:31:14Z)
- SWITCH: An Exemplar for Evaluating Self-Adaptive ML-Enabled Systems [1.2277343096128712]
Managing uncertainty in Machine Learning-Enabled Systems (MLS) is crucial for maintaining Quality of Service (QoS).
The Machine Learning Model Balancer is a concept that addresses these uncertainties by facilitating dynamic ML model switching.
This paper introduces SWITCH, an exemplar developed to enhance self-adaptive capabilities in such systems.
arXiv Detail & Related papers (2024-02-09T11:56:44Z)
- Towards Self-Adaptive Machine Learning-Enabled Systems Through QoS-Aware Model Switching [1.2277343096128712]
We propose the concept of a Machine Learning Model Balancer, focusing on managing uncertainties related to ML models by using multiple models.
AdaMLS is a novel self-adaptation approach that leverages this concept and extends the traditional MAPE-K loop for continuous MLS adaptation.
Preliminary results suggest that AdaMLS surpasses both naive approaches and single state-of-the-art models in the guarantees it delivers.
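The model-switching idea above builds on the classic MAPE-K loop from self-adaptive systems: Monitor, Analyze, Plan, and Execute over a shared Knowledge base. The following is a minimal illustrative sketch of QoS-aware model switching in that spirit; it is not taken from AdaMLS or SWITCH, and all names, thresholds, and the switching policy are hypothetical.

```python
# Hypothetical sketch (not the papers' implementation): one MAPE-K
# adaptation step that switches between ML models to keep a latency
# budget (QoS), assuming models are ordered from richest to cheapest.
from dataclasses import dataclass, field


@dataclass
class Knowledge:
    """Shared knowledge base: the QoS budget and observation history."""
    latency_budget_ms: float = 50.0
    history: list = field(default_factory=list)


def mape_k_step(models, current, observed_latency_ms, k: Knowledge):
    """One loop iteration: pick the model to serve the next requests."""
    k.history.append((current, observed_latency_ms))        # Monitor
    violated = observed_latency_ms > k.latency_budget_ms    # Analyze
    idx = models.index(current)
    if violated and idx + 1 < len(models):                  # Plan
        target = models[idx + 1]   # assumed cheaper/faster model
    elif not violated and idx > 0 and observed_latency_ms < 0.5 * k.latency_budget_ms:
        target = models[idx - 1]   # headroom: return to a richer model
    else:
        target = current
    return target                                           # Execute


models = ["large-accurate", "medium", "small-fast"]  # richest first
k = Knowledge()
m = mape_k_step(models, "large-accurate", 80.0, k)  # budget exceeded -> "medium"
```

A real system would plan against predicted (not just last-observed) QoS and weigh accuracy loss against latency gain; this sketch only shows the loop's shape.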
arXiv Detail & Related papers (2023-08-19T09:33:51Z)
- Quality Issues in Machine Learning Software Systems [10.797981721308226]
There is a strong need to ensure the serving quality of Machine Learning Software Systems (MLSSs).
This paper aims to investigate the characteristics of real quality issues in MLSSs from the viewpoint of practitioners.
We identify 18 recurring quality issues and 24 strategies to mitigate them.
arXiv Detail & Related papers (2023-06-26T18:46:46Z)
- Machine Learning for Software Engineering: A Tertiary Study [13.832268599253412]
Machine learning (ML) techniques increase the effectiveness of software engineering (SE) lifecycle activities.
We systematically collected, quality-assessed, summarized, and categorized 83 reviews in ML for SE published between 2009 and 2022, covering 6,117 primary studies.
The SE areas most tackled with ML are software quality and testing, while human-centered areas appear more challenging for ML.
arXiv Detail & Related papers (2022-11-17T09:19:53Z)
- Quality issues in Machine Learning Software Systems [12.655311590103238]
This paper aims to investigate the characteristics of real quality issues in MLSSs from the viewpoint of practitioners.
We expect that the catalog of issues developed at this step will also help us later to identify the severity, root causes, and possible remedy for quality issues of MLSSs.
arXiv Detail & Related papers (2022-08-18T17:55:18Z)
- LightAutoML: AutoML Solution for a Large Financial Services Ecosystem [108.09104876115428]
We present an AutoML system called LightAutoML developed for a large European financial services company.
Our framework was piloted and deployed in numerous applications and performed at the level of experienced data scientists.
arXiv Detail & Related papers (2021-09-03T13:52:32Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
It surveys new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.