Interpretable and Explainable Machine Learning Methods for Predictive
Process Monitoring: A Systematic Literature Review
- URL: http://arxiv.org/abs/2312.17584v1
- Date: Fri, 29 Dec 2023 12:43:43 GMT
- Authors: Nijat Mehdiyev, Maxim Majlatow and Peter Fettke
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a systematic literature review (SLR) on the
explainability and interpretability of machine learning (ML) models within the
context of predictive process mining, using the PRISMA framework. Given the
rapid advancement of artificial intelligence (AI) and ML systems, understanding
the "black-box" nature of these technologies has become increasingly critical.
Focusing specifically on the domain of process mining, this paper delves into
the challenges of interpreting ML models trained with complex business process
data. We differentiate between intrinsically interpretable models and those
that require post-hoc explanation techniques, providing a comprehensive
overview of current methodologies and their applications across various
domains. Through a rigorous bibliographic analysis, this research
offers a detailed synthesis of the state of explainability and interpretability
in predictive process mining, identifying key trends, challenges, and future
directions. Our findings aim to equip researchers and practitioners with a
deeper understanding of how to develop and implement more trustworthy,
transparent, and effective intelligent systems for predictive process
analytics.
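The distinction the abstract draws between intrinsically interpretable models and post-hoc explanation techniques can be illustrated with a minimal sketch. The rule model, black-box scorer, feature names, and thresholds below are hypothetical, not taken from the reviewed literature: an interpretable model's decision logic is its own explanation, while a black box needs a separate explanation step such as perturbation.

```python
# Illustrative contrast: an intrinsically interpretable model vs. a
# post-hoc explanation of a black-box predictor, on toy
# process-monitoring features (all names and thresholds are made up).

def rule_model(case):
    """Intrinsically interpretable: the decision rules ARE the explanation."""
    if case["waiting_time"] > 48:
        return "late"
    if case["num_reworks"] > 2:
        return "late"
    return "on_time"

def black_box(case):
    """Stand-in for an opaque model (e.g., a deep network)."""
    score = 0.04 * case["waiting_time"] + 0.3 * case["num_reworks"]
    return "late" if score > 2.0 else "on_time"

def perturbation_importance(model, case, feature):
    """Post-hoc: does zeroing out one feature flip the prediction?"""
    baseline = model(case)
    perturbed = dict(case, **{feature: 0})
    return baseline != model(perturbed)

case = {"waiting_time": 60, "num_reworks": 1}
print(rule_model(case))                                          # late
print(black_box(case))                                           # late
print(perturbation_importance(black_box, case, "waiting_time"))  # True
print(perturbation_importance(black_box, case, "num_reworks"))   # False
```

The perturbation check is the simplest form of the post-hoc family the review covers; practical tools (e.g., SHAP or LIME) refine the same idea with principled attribution schemes.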
Related papers
- Causal Inference Tools for a Better Evaluation of Machine Learning
We introduce key statistical methods such as Ordinary Least Squares (OLS) regression, Analysis of Variance (ANOVA) and logistic regression.
The document serves as a guide for researchers and practitioners, detailing how these techniques can provide deeper insights into model behavior, performance, and fairness.
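The simplest of the statistical methods this guide introduces, OLS regression, can be sketched in a few lines of pure Python. The data and function name below are illustrative, not taken from the cited work:

```python
# Minimal OLS sketch: fit y = b0 + b1*x via the closed-form
# least-squares solution for a single predictor.

def ols_fit(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 1 + 2x, so the fit is exact
b0, b1 = ols_fit(xs, ys)
print(b0, b1)              # 1.0 2.0
```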
arXiv (2024-10-02)
- Data Analysis in the Era of Generative AI
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv (2024-09-27)
- A Review of AI and Machine Learning Contribution in Predictive Business Process Management (Process Enhancement and Process Improvement Approaches)
We perform a systematic review of academic literature to investigate the integration of AI/ML in business process management.
In business process management and process mapping, AI/ML has enabled significant improvements by leveraging operational data on process metrics.
arXiv (2024-07-07)
- Explainability for Large Language Models: A Survey
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv (2023-09-02)
- Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration
Decision-makers from companies across various industries are still largely reluctant to employ applications based on modern machine learning algorithms.
We ascribe this issue to the widely held view of advanced machine learning algorithms as "black boxes".
We develop a methodology which unifies methodological knowledge from design science research and predictive analytics with state-of-the-art approaches to explainable artificial intelligence.
arXiv (2023-06-20)
- Explainable Artificial Intelligence for Improved Modeling of Processes
We evaluate the capability of modern Transformer architectures and more classical machine learning technologies to model process regularities.
We show that the ML models are capable of predicting critical outcomes and that the attention mechanisms or XAI components offer new insights into the underlying processes.
arXiv (2022-12-01)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP, which allow selecting the appropriate model based on the event log specifications.
arXiv (2022-03-30)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv (2022-03-15)
- Explainable AI Enabled Inspection of Business Process Prediction Models
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv (2021-07-16)
- Technology Readiness Levels for Machine Learning Systems
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv (2021-01-11)
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv (2020-03-02)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.