Local Universal Explainer (LUX) -- a rule-based explainer with factual,
counterfactual and visual explanations
- URL: http://arxiv.org/abs/2310.14894v2
- Date: Fri, 9 Feb 2024 11:51:51 GMT
- Title: Local Universal Explainer (LUX) -- a rule-based explainer with factual,
counterfactual and visual explanations
- Authors: Szymon Bobek, Grzegorz J. Nalepa
- Abstract summary: Local Universal Explainer (LUX) is a rule-based explainer that can generate factual, counterfactual and visual explanations.
Our method outperforms the existing approaches in terms of simplicity, global fidelity, representativeness, and consistency.
- Score: 9.065034043031668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable artificial intelligence (XAI) is one of the most intensively
developed areas of AI in recent years. It is also one of the most fragmented,
with multiple methods that focus on different aspects of explanations. This
makes it difficult to obtain the full spectrum of explanation at once in a
compact and consistent way. To address this issue, we present Local Universal
Explainer (LUX), a rule-based explainer that can generate factual,
counterfactual and visual explanations. It is based on a modified decision
tree algorithm that allows oblique splits and integrates with feature
importance XAI methods such as SHAP or LIME. Unlike other algorithms, it does
not rely on data generation; instead, it selects local concepts in the form of
high-density clusters of real data that have the highest impact on forming the
decision boundary of the explained model. We tested our method on real and
synthetic datasets and compared it with state-of-the-art rule-based explainers
such as LORE, EXPLAN and Anchor. Our method outperforms the existing
approaches in terms of simplicity, global fidelity, representativeness, and
consistency.
Related papers
- Global Human-guided Counterfactual Explanations for Molecular Properties via Reinforcement Learning [49.095065258759895]
We develop a novel global explanation model RLHEX for molecular property prediction.
It aligns the counterfactual explanations with human-defined principles, making the explanations more interpretable and easy for experts to evaluate.
The global explanations produced by RLHEX cover 4.12% more input graphs and reduce the distance between the counterfactual explanation set and the input set by 0.47% on average across three molecular datasets.
arXiv Detail & Related papers (2024-06-19T22:16:40Z)
- Enhancing Counterfactual Image Generation Using Mahalanobis Distance with
Distribution Preferences in Feature Space [7.00851481261778]
In the realm of Artificial Intelligence (AI), the importance of Explainable Artificial Intelligence (XAI) is increasingly recognized.
One notable single-instance XAI approach is counterfactual explanation, which aids users in comprehending a model's decisions.
This paper introduces a novel method for computing feature importance within the feature space of a black-box model.
arXiv Detail & Related papers (2024-05-31T08:26:53Z)
- CAGE: Causality-Aware Shapley Value for Global Explanations [4.017708359820078]
One way to explain AI models is to elucidate the predictive importance of the input features for the AI model.
Inspired by cooperative game theory, Shapley values offer a convenient way for quantifying the feature importance as explanations.
In particular, we introduce a novel sampling procedure for out-coalition features that respects the causal relations of the input features.
arXiv Detail & Related papers (2024-04-17T09:43:54Z)
- Efficient GNN Explanation via Learning Removal-based Attribution [56.18049062940675]
We propose a framework of GNN explanation named LeArn Removal-based Attribution (LARA) to address this problem.
The explainer in LARA learns to generate removal-based attribution which enables providing explanations with high fidelity.
In particular, LARA is 3.5 times faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv.
arXiv Detail & Related papers (2023-06-09T08:54:20Z)
- Disagreement amongst counterfactual explanations: How transparency can be
deceptive [0.0]
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence technique.
Not every algorithm creates uniform explanations for the same instance.
Ethical issues arise when malicious agents use this diversity to fairwash an unfair machine learning model.
arXiv Detail & Related papers (2023-04-25T09:15:37Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Best of both worlds: local and global explanations with human-understandable
concepts [10.155485106226754]
Interpretability techniques aim to provide the rationale behind a model's decision, typically by explaining either an individual prediction or a class of predictions.
We show that our method improves global explanations over TCAV when compared to ground truth, and provides useful insights.
arXiv Detail & Related papers (2021-06-16T09:05:25Z)
- Clustered Federated Learning via Generalized Total Variation
Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Explaining Predictions by Approximating the Local Decision Boundary [3.60160227126201]
We present a new procedure for local decision boundary approximation (DBA).
We train a variational autoencoder to learn a Euclidean latent space of encoded data representations.
We exploit attribute annotations to map the latent space to attributes that are meaningful to the user.
arXiv Detail & Related papers (2020-06-14T19:12:42Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)
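Several entries above (e.g. CAGE) build on Shapley values for feature attribution. For small feature sets the Shapley value can be computed exactly by enumerating coalitions; below is a minimal sketch, with the toy value function `v` and the feature payoffs invented purely for illustration (real attribution libraries approximate this, since exact enumeration is exponential in the number of features):

```python
from itertools import combinations
from math import factorial

def shapley(v, features):
    """Exact Shapley values by coalition enumeration: each feature's value
    is its weighted average marginal contribution over all coalitions."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Toy additive game: v(S) = sum of per-feature payoffs, so each feature's
# Shapley value equals its own payoff.
payoff = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(payoff[f] for f in S)
phi = shapley(v, list(payoff))
print(phi)  # approximately {'a': 1.0, 'b': 2.0, 'c': 3.0}
```

The causal-sampling idea in CAGE modifies how out-coalition features are sampled when evaluating `v`; the enumeration and weighting above stay the same.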
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.