LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations
- URL: http://arxiv.org/abs/2310.00570v1
- Date: Sun, 1 Oct 2023 04:09:59 GMT
- Title: LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations
- Authors: Sein Minn
- Abstract summary: We introduce LaPLACE-Explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
- Score: 1.0370398945228227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models have undeniably achieved impressive performance
across a range of applications. However, their often-perceived black-box
nature and lack of transparency in decision-making have raised concerns about
understanding their predictions. To tackle this challenge, researchers have
developed methods to provide explanations for machine learning models. In this
paper, we introduce LaPLACE-Explainer, designed to provide probabilistic
cause-and-effect explanations for any classifier operating on tabular data in
a human-understandable manner. The LaPLACE-Explainer component leverages the
concept of a Markov blanket to establish statistical boundaries between
relevant and non-relevant features automatically. This approach results in the
automatic generation of optimal feature subsets, serving as explanations for
predictions. Importantly, this eliminates the need to predetermine a fixed
number N of top features as explanations, enhancing the flexibility and
adaptability of our methodology. Through the incorporation of conditional
probabilities, our approach offers probabilistic causal explanations and
outperforms LIME and SHAP (well-known model-agnostic explainers) in terms of
local accuracy and consistency of explained features. LaPLACE's soundness,
consistency, local accuracy, and adaptability are rigorously validated across
various classification models. Furthermore, we demonstrate the practical
utility of these explanations via experiments with both simulated and
real-world datasets. This encompasses addressing trust-related issues, such as
evaluating prediction reliability, facilitating model selection, enhancing
trustworthiness, and identifying fairness-related concerns within classifiers.
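Since the abstract describes the mechanism only at a high level, the following Python sketch illustrates the general recipe it outlines: label a perturbed neighbourhood of a single instance with the black-box model, select a feature subset that approximately screens the local prediction off from the remaining features (standing in for the Markov blanket, with no fixed top-N), and attach a conditional probability to that subset. This is a minimal sketch under stated assumptions, not the authors' algorithm: the neighbourhood sampling, the mutual-information and accuracy thresholds, the surrogate models, and the helper names perturb_neighbourhood and local_explanation_features are all illustrative choices.

```python
# Illustrative sketch only, NOT the authors' implementation: label a perturbed
# neighbourhood of one instance with the black-box model, select a feature
# subset that approximately renders the remaining features irrelevant to the
# local prediction, and attach a conditional probability to that subset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def perturb_neighbourhood(x, X_train, n_samples=500, rng=None):
    """Sample a local neighbourhood by adding feature-wise Gaussian noise to x
    (the 0.5 * std noise scale is an arbitrary choice for this sketch)."""
    rng = rng or np.random.default_rng(0)
    scale = 0.5 * X_train.std(axis=0)
    return x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))


def local_explanation_features(model, x, X_train, mi_tol=0.01, acc_tol=0.005):
    """Rough two-phase stand-in for local Markov-blanket discovery:
    grow:   keep features whose marginal mutual information with the model's
            local predictions exceeds mi_tol;
    shrink: drop any kept feature whose removal does not hurt a small surrogate
            classifier fit on the neighbourhood (an ersatz conditional test)."""
    Z = perturb_neighbourhood(x, X_train)
    y = model.predict(Z)                       # black-box labels for the neighbourhood
    classes, counts = np.unique(y, return_counts=True)
    if len(classes) < 2 or counts.min() < 10:  # model is (nearly) constant here
        return [], Z, y
    mi = mutual_info_classif(Z, y, random_state=0)
    selected = [j for j in np.argsort(mi)[::-1] if mi[j] > mi_tol]
    if not selected:
        return [], Z, y

    def cv_acc(cols):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(clf, Z[:, cols], y, cv=3).mean()

    full = cv_acc(selected)
    for j in list(selected):
        rest = [k for k in selected if k != j]
        if not rest:
            continue
        acc = cv_acc(rest)
        if acc >= full - acc_tol:
            selected, full = rest, acc
    return selected, Z, y


data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
x = data.data[0]                               # the instance to explain

blanket, Z, y = local_explanation_features(model, x, data.data)
print("explanation features:", [data.feature_names[j] for j in blanket])

if blanket:
    # Attach a conditional probability: how likely is the model's prediction
    # for x given only the selected features, estimated from the labelled
    # neighbourhood (GaussianNB is an arbitrary estimator for this sketch).
    nb = GaussianNB().fit(Z[:, blanket], y)
    pred = model.predict(x.reshape(1, -1))[0]
    proba = dict(zip(nb.classes_, nb.predict_proba(x[blanket].reshape(1, -1))[0]))
    print(f"P(prediction = {pred} | selected features) ~ {proba.get(pred, 0.0):.3f}")
else:
    print("The model is locally constant around x; no feature subset is needed.")
```

A real Markov-blanket discovery procedure (e.g. grow-shrink or IAMB) would replace the accuracy-based shrink step above with proper conditional-independence tests; the sketch only conveys the shape of the approach described in the abstract.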
Related papers
- Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks [17.238290206236027]
Prototype-Based Self-Explainable Neural Networks (PSENNs) offer a deep yet transparent-by-design architecture.
We introduce a probabilistic reformulation of PSENNs, called Prob-PSENN, which replaces point estimates for the prototypes with probability distributions over their values.
Our experiments demonstrate that Prob-PSENNs provide more meaningful and robust explanations than their non-probabilistic counterparts.
arXiv Detail & Related papers (2024-03-20T16:47:28Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Variational Shapley Network: A Probabilistic Approach to Self-Explaining Shapley values with Uncertainty Quantification [2.6699011287124366]
Shapley values have emerged as a foundational tool in machine learning (ML) for elucidating model decision-making processes.
We introduce a novel, self-explaining method that simplifies the computation of Shapley values significantly, requiring only a single forward pass.
arXiv Detail & Related papers (2024-02-06T18:09:05Z)
- Cross Feature Selection to Eliminate Spurious Interactions and Single Feature Dominance Explainable Boosting Machines [0.0]
Interpretability is essential for legal, ethical, and practical reasons.
High-performance models can suffer from spurious interactions with redundant features and single-feature dominance.
In this paper, we explore novel approaches to address these issues by utilizing alternate Cross-feature selection, ensemble features and model configuration alteration techniques.
arXiv Detail & Related papers (2023-07-17T13:47:41Z)
- CLIMAX: An exploration of Classifier-Based Contrastive Explanations [5.381004207943597]
We propose a novel post-hoc, model-agnostic XAI technique that provides contrastive explanations justifying the classification of a black box.
Our method, which we refer to as CLIMAX, is based on local classifiers.
We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME.
arXiv Detail & Related papers (2023-07-02T22:52:58Z)
- Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, ranging from local and global importance to surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.