Explainable Predictive Process Monitoring: A User Evaluation
- URL: http://arxiv.org/abs/2202.07760v1
- Date: Tue, 15 Feb 2022 22:24:21 GMT
- Title: Explainable Predictive Process Monitoring: A User Evaluation
- Authors: Williams Rizzi, Marco Comuzzi, Chiara Di Francescomarino, Chiara
Ghidini, Suhwan Lee, Fabrizio Maria Maggi, Alexander Nolte
- Abstract summary: Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
- Score: 62.41400549499849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability is motivated by the lack of transparency of black-box Machine
Learning approaches, which do not foster trust and acceptance of Machine
Learning algorithms. This also happens in the Predictive Process Monitoring
field, where predictions, obtained by applying Machine Learning techniques,
need to be explained to users, so as to gain their trust and acceptance. In
this work, we carry out a user evaluation of explanation approaches for
Predictive Process Monitoring, aiming to investigate whether and how the
explanations provided (i) are understandable; (ii) are useful in
decision-making tasks; and (iii) can be further improved for process analysts
with different Machine Learning expertise levels. The results of the user
evaluation show that, although explanation plots are overall understandable
and useful for decision-making tasks for Business Process Management users,
with and without experience in Machine Learning, differences exist in the
comprehension and usage of different plots, as well as in the way users with
different Machine Learning expertise levels understand and use them.
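To ground what was evaluated: the study concerns explanation plots attached to process outcome predictions. The sketch below is illustrative only, not the authors' pipeline; the trace encoding, feature names, and model choice are invented stand-ins. It shows one common way such a plot can be produced, via permutation feature importance on an outcome prediction model.

```python
# Illustrative sketch: a feature-importance "explanation plot" for an
# outcome-oriented predictive process monitoring task. The encoded event
# log, model, and feature names below are invented, not the paper's setup.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical trace encoding: one row per running case, one column per
# aggregated event-log feature (counts, durations, last-activity flags).
feature_names = ["n_events", "elapsed_days", "amount", "last_act_review"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: a simple, model-agnostic global explanation.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)

order = np.argsort(imp.importances_mean)
plt.barh(np.array(feature_names)[order], imp.importances_mean[order])
plt.xlabel("Mean accuracy drop when the feature is permuted")
plt.title("Which features drive the outcome prediction?")
plt.tight_layout()
plt.show()
```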
Related papers
- Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes.
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods for similar problems.
(arXiv 2023-04-03)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
(arXiv 2023-02-16)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
(arXiv 2023-01-24)
- Explainability in Machine Learning: a Pedagogical Perspective [9.393988089692947]
We provide a pedagogical perspective on how to structure the learning process to better impart knowledge to students and researchers in machine learning.
We discuss the advantages and disadvantages of various opaque and transparent machine learning models.
We also discuss ways to structure potential assignments to best help students learn to use explainability as a tool alongside any given machine learning application.
(arXiv 2022-02-21)
- Intuitiveness in Active Teaching [7.8029610421817654]
We analyze the intuitiveness of certain algorithms when they are actively taught by users.
We offer a systematic method to judge the efficacy of human-machine interactions.
(arXiv 2020-12-25)
- The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems [0.3169089186688223]
We study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from machine learning generated model output.
Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli.
(arXiv 2020-09-14)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
(arXiv 2020-08-21)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets (a generic counterfactual-search sketch follows this list).
(arXiv 2020-08-19)
- Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning [9.887110107270196]
Recent work has demonstrated the promise of combining local explanations with active learning for understanding and supervising black-box models.
Here we show that, under specific conditions, these algorithms may misrepresent the quality of the model being learned.
We address this narrative bias by introducing explanatory guided learning.
(arXiv 2020-07-20)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction (the quantity is written out after this list).
(arXiv 2020-03-01)
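For the DECE entry above: the core primitive is a counterfactual explanation, the smallest change to an instance that flips the model's prediction. The sketch below is a generic greedy search under invented choices of model, step size, and budget; it is not the DECE system itself.

```python
# Generic counterfactual-explanation search (not DECE): greedily nudge one
# feature at a time until the model's predicted class flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.25, max_iter=200):
    x_cf = x.astype(float).copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]  # class we want instead
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # prediction flipped: x_cf is a counterfactual
        # Take the single-feature nudge that most increases the target prob.
        best, best_p = None, -1.0
        for j in range(len(x_cf)):
            for d in (step, -step):
                cand = x_cf.copy()
                cand[j] += d
                p = model.predict_proba(cand.reshape(1, -1))[0, target]
                if p > best_p:
                    best, best_p = cand, p
        x_cf = best
    return None  # no counterfactual found within the budget

x_cf = counterfactual(X[0], model)
if x_cf is not None:
    changed = np.nonzero(~np.isclose(X[0], x_cf))[0]
    print("features changed to flip the prediction:", changed)
```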
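Likewise, for the information-theoretic entry, the quantity used is conditional mutual information. A sketch in standard notation follows; the symbols ($E$ for the explanation, $\hat{Y}$ for the prediction, $U$ for the user's knowledge) are assumptions here, as the paper's own notation may differ.

```latex
% Effect of an explanation E on a prediction \hat{Y}, given user knowledge U,
% measured as conditional mutual information (standard definition):
I(E;\hat{Y}\mid U)
  = \mathbb{E}\!\left[\log \frac{p(E,\hat{Y}\mid U)}{p(E\mid U)\,p(\hat{Y}\mid U)}\right]
```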
This list is automatically generated from the titles and abstracts of the papers on this site.