Communicating Uncertainty in Machine Learning Explanations: A
Visualization Analytics Approach for Predictive Process Monitoring
- URL: http://arxiv.org/abs/2304.05736v1
- Date: Wed, 12 Apr 2023 09:44:32 GMT
- Title: Communicating Uncertainty in Machine Learning Explanations: A
Visualization Analytics Approach for Predictive Process Monitoring
- Authors: Nijat Mehdiyev, Maxim Majlatow and Peter Fettke
- Abstract summary: This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches.
By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As data-driven intelligent systems advance, the need for reliable and
transparent decision-making mechanisms has become increasingly important.
Therefore, it is essential to integrate uncertainty quantification and model
explainability approaches to foster trustworthy business and operational
process analytics. This study explores how model uncertainty can be effectively
communicated in global and local post-hoc explanation approaches, such as
Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE)
plots. In addition, this study examines appropriate visualization analytics
approaches to facilitate such methodological integration. By combining these
two research directions, decision-makers can not only justify the plausibility
of explanation-driven actionable insights but also validate their reliability.
Finally, the study includes expert interviews to assess the suitability of the
proposed approach and designed interface for a real-world predictive process
monitoring problem in the manufacturing domain.
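The core idea above, overlaying uncertainty on PDP and ICE explanations, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses synthetic data, scikit-learn quantile gradient boosting as a stand-in for whatever uncertainty quantification method is preferred, and matplotlib for the visual overlay; all variable names, quantile levels, and data are assumptions.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for process data: two features, heteroscedastic noise.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1]) + rng.normal(0.0, 0.3 + X[:, 0])

# One gradient-boosted model per quantile: a median estimate plus a 10-90%
# band (an assumed, illustrative choice of uncertainty quantification).
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

def ice_curves(model, X, feature, grid):
    """ICE: re-predict every instance while sweeping one feature over a grid."""
    curves = np.empty((len(X), len(grid)))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = value
        curves[:, j] = model.predict(X_mod)
    return curves

grid = np.linspace(0.0, 1.0, 25)
feature = 0

# The PDP is the pointwise mean of the ICE curves; computing it once per
# quantile model turns the single PDP line into a line with an uncertainty band.
pdp = {q: ice_curves(m, X, feature, grid).mean(axis=0) for q, m in models.items()}

fig, ax = plt.subplots()
for curve in ice_curves(models[0.5], X[:25], feature, grid):  # a few ICE curves
    ax.plot(grid, curve, color="grey", alpha=0.3, linewidth=0.8)
ax.fill_between(grid, pdp[0.1], pdp[0.9], alpha=0.3, label="10-90% quantile band")
ax.plot(grid, pdp[0.5], color="black", label="PDP (median model)")
ax.set(xlabel="feature 0", ylabel="predicted outcome")
ax.legend()
plt.show()
```

The same pattern extends to a local reading: each grey ICE curve could carry its own band, which is one way the global (PDP) and local (ICE) views discussed in the abstract can both communicate uncertainty.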
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Interpretable Concept-Based Memory Reasoning [12.562474638728194]
Concept-based Memory Reasoner (CMR) is a novel CBM designed to provide a human-understandable and provably verifiable task prediction process.
CMR achieves better accuracy-interpretability trade-offs than state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows rule interventions, and supports pre-deployment verification.
arXiv Detail & Related papers (2024-07-22T10:32:48Z)
- Self-Distilled Disentangled Learning for Counterfactual Prediction [49.84163147971955]
We propose the Self-Distilled Disentanglement framework, known as $SD^2$.
Grounded in information theory, it ensures theoretically sound independent disentangled representations without intricate mutual information estimator designs.
Our experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach.
arXiv Detail & Related papers (2024-06-09T16:58:19Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploring the model's capability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective [0.0]
This paper introduces a comprehensive, multi-stage machine learning methodology that integrates information systems and artificial intelligence.
The proposed framework adeptly addresses common limitations of existing solutions, such as the neglect of data-driven estimation.
Our approach employs Quantile Regression Forests for generating interval predictions, alongside both local and global variants of SHapley Additive Explanations.
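A minimal sketch of the Quantile Regression Forest interval predictions this entry mentions (not the paper's code): it re-implements the idea in the spirit of Meinshausen (2006) on top of scikit-learn's RandomForestRegressor, where each tree spreads unit weight over the training points that share a leaf with the query and intervals are read off as weighted empirical quantiles. Data and quantile levels are made up; the SHAP component is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(400, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.2, size=400)

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               random_state=1).fit(X, y)

def weighted_quantile(values, weights, qs):
    """Empirical quantiles of `values` under non-negative weights."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(qs, cdf, v)

def qrf_predict(forest, X_train, y_train, X_test, qs=(0.1, 0.5, 0.9)):
    """QRF-style prediction: every tree spreads weight 1 over the training
    points falling into the same leaf as the query; interval endpoints are
    weighted quantiles of the training targets."""
    train_leaves = forest.apply(X_train)   # leaf ids, shape (n_train, n_trees)
    test_leaves = forest.apply(X_test)     # leaf ids, shape (n_test, n_trees)
    out = np.empty((len(X_test), len(qs)))
    for i, leaves in enumerate(test_leaves):
        weights = np.zeros(len(X_train))
        for t, leaf in enumerate(leaves):
            same = train_leaves[:, t] == leaf
            weights[same] += 1.0 / same.sum()
        out[i] = weighted_quantile(y_train, weights, qs)
    return out

X_new = rng.uniform(0.0, 1.0, size=(3, 3))
for x, (lo, med, hi) in zip(X_new, qrf_predict(forest, X, y, X_new)):
    print(f"x={np.round(x, 2)}  median={med:.2f}  80% interval=[{lo:.2f}, {hi:.2f}]")
```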
arXiv Detail & Related papers (2023-04-13T11:18:22Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Evaluating Bayesian Model Visualisations [0.39845810840390733]
Probabilistic models inform an increasingly broad range of business and policy decisions ultimately made by people.
Recent progress in algorithms, computation, and software frameworks has facilitated the proliferation of Bayesian probabilistic models.
While they can empower decision makers to explore complex queries and to perform what-if-style conditioning in theory, suitable visualisations and interactive tools are needed to maximise users' comprehension and rational decision making under uncertainty.
arXiv Detail & Related papers (2022-01-10T19:15:39Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Local Post-Hoc Explanations for Predictive Process Monitoring in Manufacturing [0.0]
This study proposes an innovative explainable predictive quality analytics solution to facilitate data-driven decision-making in manufacturing.
It combines process mining, machine learning, and explainable artificial intelligence (XAI) methods.
arXiv Detail & Related papers (2020-09-22T13:07:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.