Local Post-Hoc Explanations for Predictive Process Monitoring in
Manufacturing
- URL: http://arxiv.org/abs/2009.10513v2
- Date: Thu, 10 Jun 2021 08:58:41 GMT
- Authors: Nijat Mehdiyev and Peter Fettke
- Abstract summary: This study proposes an innovative explainable predictive quality analytics solution to facilitate data-driven decision-making in manufacturing.
It combines process mining, machine learning, and explainable artificial intelligence (XAI) methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes an innovative explainable predictive quality analytics
solution to facilitate data-driven decision-making for process planning in
manufacturing by combining process mining, machine learning, and explainable
artificial intelligence (XAI) methods. For this purpose, after integrating the
top-floor and shop-floor data obtained from various enterprise information
systems, a deep learning model was applied to predict the process outcomes.
Since this study aims to operationalize the delivered predictive insights by
embedding them into decision-making processes, it is essential to generate
relevant explanations for domain experts. To this end, two complementary local
post-hoc explanation approaches, Shapley values and Individual Conditional
Expectation (ICE) plots, are adopted, which are expected to enhance the
decision-making capabilities by enabling experts to examine explanations from
different perspectives. After assessing the predictive strength of the applied
deep neural network with relevant binary classification evaluation measures, a
discussion of the generated explanations is provided.
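The two local post-hoc approaches named in the abstract can be illustrated in a few lines. The sketch below is a minimal, hypothetical stand-in for the paper's pipeline: the data, features, and model are synthetic, ICE curves come from scikit-learn's `partial_dependence` with `kind="individual"`, and Shapley values are computed by brute-force enumeration of feature coalitions (feasible only for a handful of features), not by the approximation the authors may have used.

```python
# Hedged sketch: local post-hoc explanations (ICE curves and exact Shapley
# values) for a binary process-outcome classifier. All data are synthetic
# stand-ins, not the paper's manufacturing dataset.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four synthetic process features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome

model = RandomForestClassifier(random_state=0).fit(X, y)

# --- ICE: one curve per instance, varying feature 0 over a grid ---
ice = partial_dependence(model, X[:50], features=[0], kind="individual")
curves = ice["individual"][0]  # shape: (n_instances, n_grid_points)

# --- Exact Shapley values for one instance (brute force, small d only) ---
def shapley_values(f, x, background):
    """phi_i per feature; absent features are replaced by background means."""
    d = len(x)
    base = background.mean(axis=0)

    def value(S):  # model output with only features in S taken from x
        z = base.copy()
        z[list(S)] = x[list(S)]
        return f(z.reshape(1, -1))[0]

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

f = lambda z: model.predict_proba(z)[:, 1]
phi = shapley_values(f, X[0], X)
# Efficiency property: the phi_i sum to f(x) minus the baseline prediction.
```

The efficiency check at the end is what makes Shapley attributions additive, which is why they pair naturally with the per-instance view an ICE curve gives.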
Related papers
- Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes [45.502284864662585]
We introduce a data-driven approach, REVISEDplus, to generate plausible counterfactual explanations.
First, we restrict the counterfactual algorithm to generate counterfactuals that lie within a high-density region of the process data.
We also ensure plausibility by learning sequential patterns between the activities in the process cases.
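The density-restriction idea in this summary can be approximated very simply: choose as the counterfactual the nearest training instance with the flipped prediction, which by construction lies in a populated region of the data. This is an illustrative sketch only, not the REVISEDplus algorithm; the classifier and data are invented for the example.

```python
# Illustrative sketch (not REVISEDplus): the nearest observed instance with
# the opposite predicted outcome serves as a plausible counterfactual,
# since it lies in a region actually covered by the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def nearest_counterfactual(x, X_ref, clf):
    """Closest reference point whose prediction flips relative to x."""
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    cands = X_ref[clf.predict(X_ref) == target]
    return cands[np.argmin(np.linalg.norm(cands - x, axis=1))]

x = X[0]
cf = nearest_counterfactual(x, X, clf)  # minimal change that flips the outcome
```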
arXiv Detail & Related papers (2024-03-14T09:56:35Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Quantifying and Explaining Machine Learning Uncertainty in Predictive
Process Monitoring: An Operations Research Perspective [0.0]
This paper introduces a comprehensive, multi-stage machine learning methodology that integrates information systems and artificial intelligence.
The proposed framework adeptly addresses common limitations of existing solutions, such as the neglect of data-driven estimation.
Our approach employs Quantile Regression Forests for generating interval predictions, alongside both local and global variants of SHapley Additive Explanations.
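Interval predictions of the kind this summary describes can be sketched with quantile regression. Note the hedge: Quantile Regression Forests live in third-party packages, so scikit-learn's quantile gradient boosting is used below purely as an illustrative stand-in, on synthetic data.

```python
# Hedged sketch of interval prediction via quantile regression, standing in
# for the Quantile Regression Forests of the summarized paper. Data are
# synthetic; fitting one model per quantile yields a prediction interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=400)  # noisy process KPI

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = rng.uniform(0, 10, size=(100, 1))
lo, hi = lower.predict(X_new), upper.predict(X_new)
# Each prediction is now an interval [lo, hi] rather than a point estimate,
# conveying the model's uncertainty to the decision maker.
```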
arXiv Detail & Related papers (2023-04-13T11:18:22Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Predicting and Understanding Human Action Decisions during Skillful
Joint-Action via Machine Learning and Explainable-AI [1.3381749415517021]
This study uses supervised machine learning and explainable artificial intelligence to model, predict and understand human decision-making.
Long short-term memory networks were trained to predict the target selection decisions of expert and novice actors completing a dyadic herding task.
arXiv Detail & Related papers (2022-06-06T16:54:43Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z)
- Explainable Artificial Intelligence for Process Mining: A General
Overview and Application of a Novel Local Explanation Approach for Predictive
Process Monitoring [0.0]
This study proposes a conceptual framework that seeks to establish and promote an understanding of the decision-making environment.
This study defines the local regions from the validation dataset by using the intermediate latent space representations.
The adopted deep learning classifier achieves a good performance with the Area Under the ROC Curve of 0.94.
arXiv Detail & Related papers (2020-09-04T10:28:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.