Evaluation of Local Model-Agnostic Explanations Using Ground Truth
- URL: http://arxiv.org/abs/2106.02488v1
- Date: Fri, 4 Jun 2021 13:47:31 GMT
- Title: Evaluation of Local Model-Agnostic Explanations Using Ground Truth
- Authors: Amir Hossein Akhavan Rahnama, Judith Bütepage, Pierre Geurts, Henrik Boström
- Abstract summary: Explanation techniques are commonly evaluated using human-grounded methods.
We propose a functionally-grounded evaluation procedure for local model-agnostic explanation techniques.
- Score: 4.278336455989584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanation techniques are commonly evaluated using human-grounded methods,
limiting the possibilities for large-scale evaluations and rapid progress in
the development of new techniques. We propose a functionally-grounded
evaluation procedure for local model-agnostic explanation techniques. In our
approach, we generate ground truth for explanations when the black-box model is
Logistic Regression and Gaussian Naive Bayes and compare how similar each
explanation is to the extracted ground truth. In our empirical study,
explanations of Local Interpretable Model-agnostic Explanations (LIME), SHapley
Additive exPlanations (SHAP), and Local Permutation Importance (LPI) are
compared in terms of how similar they are to the extracted ground truth. In the
case of Logistic Regression, we find that the performance of the explanation
techniques is highly dependent on the normalization of the data. In contrast,
Local Permutation Importance outperforms the other techniques on Naive Bayes,
irrespective of normalization. We hope that this work lays the foundation for
further research into functionally-grounded evaluation methods for explanation
techniques.
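As a rough sketch of what such a functionally-grounded comparison can look like (not the paper's exact protocol), the snippet below trains a logistic regression black box, takes coefficient × feature value as an assumed ground-truth local importance, computes a simplified Local Permutation Importance explanation, and scores their agreement with Spearman rank correlation; the dataset, the permutation scheme, and the similarity measure are illustrative choices.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Train the "black-box" model whose internals provide the ground truth.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)      # the paper finds normalization matters
model = LogisticRegression(max_iter=5000).fit(X, y)

x = X[0]                                   # instance to be explained

# Assumed ground-truth local importance for logistic regression:
# each feature's contribution to the log-odds, i.e. coefficient * feature value.
ground_truth = model.coef_[0] * x

# Simplified Local Permutation Importance: the drop in predicted probability when
# one feature of x is replaced by values drawn from the data.
rng = np.random.default_rng(0)
p_orig = model.predict_proba(x[None, :])[0, 1]
lpi = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_perm = np.tile(x, (100, 1))
    x_perm[:, j] = rng.choice(X[:, j], size=100)
    lpi[j] = p_orig - model.predict_proba(x_perm)[:, 1].mean()

# Functionally-grounded score: rank agreement between explanation and ground truth.
rho, _ = spearmanr(np.abs(lpi), np.abs(ground_truth))
print(f"Spearman rank correlation with ground truth: {rho:.3f}")
```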
Related papers
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- The Blame Problem in Evaluating Local Explanations, and How to Tackle it [0.0]
The bar for developing new explainability techniques is low due to the lack of optimal evaluation measures.
Without rigorous measures, it is hard to have concrete evidence of whether new explanation techniques can significantly outperform their predecessors.
arXiv Detail & Related papers (2023-10-05T11:21:49Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
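MACE's RL-based search and gradient-less descent are not reproduced here; as a minimal illustration of the counterfactual objective itself (validity, sparsity, proximity), the sketch below runs a plain random search that perturbs a few features at a time with values observed in the data and keeps the closest class-flipping candidate. The model, dataset, and search budget are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Black-box model and an instance currently predicted as class 0.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[y == 0][0]
target = 1

rng = np.random.default_rng(0)
best, best_cost = None, np.inf
for _ in range(2000):
    cand = x.copy()
    # Sparsity: change only a few randomly chosen features, using values
    # observed elsewhere in the data to keep the candidate plausible.
    idx = rng.choice(X.shape[1], size=3, replace=False)
    cand[idx] = X[rng.integers(len(X))][idx]
    if model.predict(cand[None, :])[0] == target:          # validity: class flips
        cost = np.linalg.norm(cand - x, ord=1)             # proximity: stay close to x
        if cost < best_cost:
            best, best_cost = cand, cost

if best is not None:
    changed = np.flatnonzero(best != x)
    print(f"Changed features: {changed}, L1 distance: {best_cost:.2f}")
else:
    print("No counterfactual found within the search budget.")
```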
- Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning [15.886405745163234]
We propose a model-agnostic local explanation method inspired by the invariant risk minimization principle.
Our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box without access to side information.
arXiv Detail & Related papers (2022-01-28T14:29:25Z)
- Towards Better Model Understanding with Path-Sufficient Explanations [11.517059323883444]
The Path-Sufficient Explanations Method (PSEM) produces a sequence of sufficient explanations of strictly decreasing size for a given input.
PSEM can be thought of as tracing the local boundary of the model in a smooth manner, providing better intuition about the local model behavior for the specific input.
A user study demonstrates the method's strength in communicating local behavior, where many users are able to correctly determine the prediction made by the model.
arXiv Detail & Related papers (2021-09-13T16:06:10Z)
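PSEM itself is not reproduced here; as a toy illustration of a path of sufficient explanations, the sketch below uses a crude notion of sufficiency (features outside the kept set are replaced by the training mean) and greedily drops features one at a time as long as the prediction is preserved, yielding sets of strictly decreasing size. The masking baseline and greedy order are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# A crude sufficiency check: keep a subset of features at their original values,
# replace the rest with the training mean, and see whether the prediction survives.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
x, baseline = X[0], X.mean(axis=0)
pred = model.predict(x[None, :])[0]

def prediction_survives(keep):
    masked = baseline.copy()
    masked[list(keep)] = x[list(keep)]
    return model.predict(masked[None, :])[0] == pred

# Greedily drop features while the prediction is unchanged, producing a path of
# sufficient feature sets of strictly decreasing size.
keep = set(range(X.shape[1]))
path = [sorted(keep)]
shrunk = True
while shrunk:
    shrunk = False
    for j in sorted(keep):
        if prediction_survives(keep - {j}):
            keep.discard(j)
            path.append(sorted(keep))
            shrunk = True
            break

print("Sufficient-set sizes along the path:", [len(s) for s in path])
```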
- Locally Interpretable Model Agnostic Explanations using Gaussian Processes [2.9189409618561966]
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for explaining the prediction of a single instance.
We propose a Gaussian Process (GP) based variation of locally interpretable models.
We demonstrate that the proposed technique is able to generate faithful explanations using far fewer samples than LIME.
arXiv Detail & Related papers (2021-08-16T05:49:01Z)
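The GP surrogate itself is not reproduced here; for context, the sketch below shows the standard LIME-style procedure that the paper's GP variant builds on: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients act as the local explanation. The sampling scheme, kernel width, and black-box model are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Black-box model and instance to explain.
X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
x = X[0]

# LIME-style sampling: perturb the instance with Gaussian noise scaled per feature.
rng = np.random.default_rng(0)
scale = X.std(axis=0)
Z = x + rng.normal(0.0, scale, size=(500, X.shape[1]))
pz = model.predict_proba(Z)[:, 1]

# Weight samples by proximity to x (RBF kernel, heuristic width), then fit a
# weighted linear surrogate; its coefficients are the local explanation.
d = np.linalg.norm((Z - x) / scale, axis=1)
w = np.exp(-(d ** 2) / (2 * (0.75 * np.sqrt(X.shape[1])) ** 2))
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)

top = np.argsort(-np.abs(surrogate.coef_))[:5]
print("Top local features:", top, np.round(surrogate.coef_[top], 4))
```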
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic, and exponentially fast decaying error bounds which apply to both the approximated kernel and the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
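SLEIPNIR's deterministic quadrature features and derivative handling are not reproduced here; the sketch below instead shows the more common random Fourier feature approximation of an RBF kernel, under which approximate GP regression reduces to ridge regression in the feature space. The lengthscale, noise level, and toy data are illustrative assumptions.

```python
import numpy as np

def rbf_random_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel (Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy 1-D regression problem: noisy observations of sin(x).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# In the expanded feature space, approximate GP regression becomes ridge regression.
Phi = rbf_random_features(X)
noise = 0.01
w = np.linalg.solve(Phi.T @ Phi + noise * np.eye(Phi.shape[1]), Phi.T @ y)

X_test = np.linspace(-3, 3, 5)[:, None]
print(np.round(rbf_random_features(X_test) @ w, 3))   # approximate posterior mean
```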