Monitoring Machine Learning Systems: A Multivocal Literature Review
- URL: http://arxiv.org/abs/2509.14294v1
- Date: Wed, 17 Sep 2025 02:18:13 GMT
- Title: Monitoring Machine Learning Systems: A Multivocal Literature Review
- Authors: Hira Naveed, Scott Barnett, Chetan Arora, John Grundy, Hourieh Khalajzadeh, Omar Haggag
- Abstract summary: We conducted a multivocal literature review (MLR) following the well-established guidelines by Garousi. We analyzed studies based on four key areas: (1) the motivations, goals, and context; (2) the monitored aspects, specific techniques, metrics, and tools; (3) the contributions and benefits; and (4) the current limitations.
- Score: 6.493534763461715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context: Dynamic production environments make it challenging to maintain reliable machine learning (ML) systems. Runtime issues, such as changes in data patterns or operating contexts, that degrade model performance are a common occurrence in production settings. Monitoring enables early detection and mitigation of these runtime issues, helping maintain users' trust and prevent unwanted consequences for organizations. Aim: This study aims to provide a comprehensive overview of the ML monitoring literature. Method: We conducted a multivocal literature review (MLR) following the well-established guidelines by Garousi to investigate various aspects of ML monitoring approaches in 136 papers. Results: We analyzed selected studies based on four key areas: (1) the motivations, goals, and context; (2) the monitored aspects, specific techniques, metrics, and tools; (3) the contributions and benefits; and (4) the current limitations. We also discuss several insights found in the studies, their implications, and recommendations for future research and practice. Conclusion: Our MLR identifies and summarizes ML monitoring practices and gaps, emphasizing similarities and disconnects between formal and gray literature. Our study is valuable for both academics and practitioners, as it helps select appropriate solutions, highlights limitations in current approaches, and provides future directions for research and tool development.
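One of the runtime issues the abstract mentions, changes in data patterns, is commonly checked by comparing recent production inputs against a reference (training-time) distribution. The sketch below is illustrative only: the paper surveys many monitoring techniques and does not prescribe this one, and the two-sample Kolmogorov-Smirnov test, window size, and threshold are assumptions made for the example.

```python
# Minimal sketch of one monitoring concern discussed in the review:
# detecting a shift in an input feature's distribution (data drift).
# The KS test, window size, and threshold are illustrative assumptions,
# not a method prescribed by the paper.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, live_window: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Compare a recent production window against reference (training) data."""
    _statistic, p_value = ks_2samp(reference, live_window)
    return p_value < alpha  # small p-value: the distributions likely differ

# Hypothetical usage with simulated data.
rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)    # training-time feature values
live_window = rng.normal(loc=0.4, scale=1.0, size=500)   # shifted production values
if drift_detected(reference, live_window):
    print("Drift alert: investigate model inputs or consider retraining.")
```

Such a check supports the early detection and mitigation goal described in the abstract; in practice, the alert would feed into the broader monitoring workflows that the review categorizes.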
Related papers
- MDK12-Bench: A Comprehensive Evaluation of Multimodal Large Language Models on Multidisciplinary Exams [50.293164501645975]
Multimodal large language models (MLLMs) integrate language and visual cues for problem-solving. Current benchmarks for measuring the intelligence of MLLMs suffer from limited scale, narrow coverage, and unstructured knowledge. We introduce MDK12-Bench, a large-scale multidisciplinary benchmark built from real-world K-12 exams spanning six disciplines.
arXiv Detail & Related papers (2025-08-09T06:21:10Z) - From Tea Leaves to System Maps: Context-awareness in Monitoring Operational Machine Learning Models [10.17792666432021]
This paper presents a systematic review to characterize and structure the various types of contextual information in this domain. We introduce the Contextual System-Aspect-Representation (C-SAR) framework, a conceptual model that synthesizes our findings. We also identify 20 recurring and potentially reusable patterns of specific system, aspect, and representation triplets, and map them to the monitoring activities they support.
arXiv Detail & Related papers (2025-06-12T14:49:42Z) - Trends and Challenges in Authorship Analysis: A Review of ML, DL, and LLM Approaches [1.8686807993563161]
Authorship analysis plays an important role in diverse domains, including forensic linguistics, academia, cybersecurity, and digital content authentication. This paper presents a systematic literature review on two key sub-tasks of authorship analysis: Author Attribution and Author Verification.
arXiv Detail & Related papers (2025-05-21T12:06:08Z) - How do Large Language Models Understand Relevance? A Mechanistic Interpretability Perspective [64.00022624183781]
Large language models (LLMs) can assess relevance and support information retrieval (IR) tasks. We investigate how different LLM modules contribute to relevance judgment through the lens of mechanistic interpretability.
arXiv Detail & Related papers (2025-04-10T16:14:55Z) - Survey on AI-Generated Media Detection: From Non-MLLM to MLLM [51.91311158085973]
Methods for detecting AI-generated media have evolved rapidly. General-purpose detectors based on MLLMs integrate authenticity verification, explainability, and localization capabilities. Ethical and security considerations have emerged as critical global concerns.
arXiv Detail & Related papers (2025-02-07T12:18:20Z) - Perspective of Software Engineering Researchers on Machine Learning Practices Regarding Research, Review, and Education [12.716955305620191]
This study aims to contribute to the knowledge about the synergy between Machine Learning (ML) and Software Engineering (SE). We analyzed SE researchers familiar with ML or who authored SE articles using ML, along with the articles themselves. We found diverse practices focusing on data collection, model training, and evaluation.
arXiv Detail & Related papers (2024-11-28T18:21:24Z) - MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs [97.94579295913606]
Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia. In the development process, evaluation is critical since it provides intuitive feedback and guidance on improving models. This work aims to offer researchers an easy grasp of how to effectively evaluate MLLMs according to different needs and to inspire better evaluation methods.
arXiv Detail & Related papers (2024-11-22T18:59:54Z) - Surveying the MLLM Landscape: A Meta-Review of Current Surveys [17.372501468675303]
Multimodal Large Language Models (MLLMs) have become a transformative force in the field of artificial intelligence.
This survey aims to provide a systematic review of benchmark tests and evaluation methods for MLLMs.
arXiv Detail & Related papers (2024-09-17T14:35:38Z) - Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z) - Model-driven Engineering for Machine Learning Components: A Systematic Literature Review [8.810090413018798]
We analyzed studies with respect to several areas of interest and identified the key motivations behind using MDE4ML.
We also discuss the gaps in existing literature and provide recommendations for future work.
arXiv Detail & Related papers (2023-11-01T04:29:47Z) - Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT). We also review the potential pitfalls of SFT and the criticism against it, along with efforts pointing out current deficiencies of existing strategies.
arXiv Detail & Related papers (2023-08-21T15:35:16Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)