More Engineering, No Silos: Rethinking Processes and Interfaces in
Collaboration between Interdisciplinary Teams for Machine Learning Projects
- URL: http://arxiv.org/abs/2110.10234v1
- Date: Tue, 19 Oct 2021 20:03:20 GMT
- Title: More Engineering, No Silos: Rethinking Processes and Interfaces in
Collaboration between Interdisciplinary Teams for Machine Learning Projects
- Authors: Nadia Nahar, Shurui Zhou, Grace Lewis, Christian Kästner
- Abstract summary: We identify key collaboration challenges that teams face when building and deploying machine learning systems into production.
We report on common collaboration points in the development of production ML systems for requirements, data, and integration, as well as corresponding team patterns and challenges.
- Score: 4.482886054198202
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The introduction of machine learning (ML) components in software projects has
created the need for software engineers to collaborate with data scientists and
other specialists. While collaboration can always be challenging, ML introduces
additional challenges with its exploratory model development process,
additional skills and knowledge needed, difficulties testing ML systems, need
for continuous evolution and monitoring, and non-traditional quality
requirements such as fairness and explainability. Through interviews with 45
practitioners from 28 organizations, we identified key collaboration challenges
that teams face when building and deploying ML systems into production. We
report on common collaboration points in the development of production ML
systems for requirements, data, and integration, as well as corresponding team
patterns and challenges. We find that most of these challenges center around
communication, documentation, engineering, and process and collect
recommendations to address these challenges.
Related papers
- Federated Large Language Models: Current Progress and Future Directions [63.68614548512534]
This paper surveys Federated learning for LLMs (FedLLM), highlighting recent advances and future directions.
We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges.
arXiv Detail & Related papers (2024-09-24T04:14:33Z)
- Towards Effective Collaboration between Software Engineers and Data Scientists developing Machine Learning-Enabled Systems [1.1153433121962064]
Development of Machine Learning (ML)-enabled systems encompasses several social and technical challenges.
This paper has the objective of understanding how to enhance the collaboration between two key actors in building these systems: software engineers and data scientists.
Our research has found that collaboration between these actors is important for effectively developing ML-enabled systems.
arXiv Detail & Related papers (2024-07-22T17:35:18Z)
- On the Interaction between Software Engineers and Data Scientists when building Machine Learning-Enabled Systems [1.2184324428571227]
Machine Learning (ML) components have been increasingly integrated into the core systems of organizations.
One of the key challenges is the effective interaction between actors with different backgrounds who need to work closely together.
This paper presents an exploratory case study to understand the current interaction and collaboration dynamics between these roles in ML projects.
arXiv Detail & Related papers (2024-02-08T00:27:56Z)
- Competition-Level Problems are Effective LLM Evaluators [121.15880285283116]
This paper aims to evaluate the reasoning capacities of large language models (LLMs) in solving recent programming problems in Codeforces.
We first provide a comprehensive evaluation of GPT-4's perceived zero-shot performance on this task, considering various aspects such as problems' release time, difficulties, and types of errors encountered.
Surprisingly, the perceived performance of GPT-4 has experienced a cliff-like decline on problems released after September 2021, consistently across all difficulties and problem types.
arXiv Detail & Related papers (2023-12-04T18:58:57Z)
- An Exploratory Study of V-Model in Building ML-Enabled Software: A Systems Engineering Perspective [0.7252027234425334]
Machine learning (ML) components are being added to more and more critical and impactful software systems.
This research investigates the use of V-Model in addressing the interdisciplinary collaboration challenges when building ML-enabled systems.
arXiv Detail & Related papers (2023-08-10T06:53:32Z)
- Machine Learning Application Development: Practitioners' Insights [18.114724750441724]
We report about a survey that aimed to understand the challenges and best practices of ML application development.
We synthesize the results obtained from 80 practitioners into 17 findings, outlining challenges and best practices for ML application development.
We hope that the reported challenges will inform the research community about topics that need to be investigated to improve the engineering process and the quality of ML-based applications.
arXiv Detail & Related papers (2021-12-31T03:38:37Z)
- Understanding the Usability Challenges of Machine Learning in High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology [53.063411515511056]
We propose a process model for the development of machine learning applications.
The first phase combines business and data understanding as data availability oftentimes affects the feasibility of the project.
The sixth phase covers state-of-the-art approaches for monitoring and maintenance of machine learning applications.
arXiv Detail & Related papers (2020-03-11T08:25:49Z)
- Engineering AI Systems: A Research Agenda [9.84673609667263]
We provide a conceptualization of the typical evolution patterns that companies experience when employing machine learning.
The main contribution of the paper is a research agenda for AI engineering that provides an overview of the key engineering challenges surrounding ML solutions.
arXiv Detail & Related papers (2020-01-16T20:29:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.