Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
- URL: http://arxiv.org/abs/2001.07417v5
- Date: Wed, 13 Oct 2021 07:50:39 GMT
- Title: Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
- Authors: Carlos Fernández-Loría, Foster Provost, Xintian Han
- Abstract summary: We consider an explanation as a set of the system's data inputs that causally drives the decision.
We show that features that have a large importance weight for a model prediction may not affect the corresponding decision.
- Score: 11.871523410051527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We examine counterfactual explanations for explaining the decisions made by
model-based AI systems. The counterfactual approach we consider defines an
explanation as a set of the system's data inputs that causally drives the
decision (i.e., changing the inputs in the set changes the decision) and is
irreducible (i.e., changing any subset of the inputs does not change the
decision). We (1) demonstrate how this framework may be used to provide
explanations for decisions made by general, data-driven AI systems that may
incorporate features with arbitrary data types and multiple predictive models,
and (2) propose a heuristic procedure to find the most useful explanations
depending on the context. We then contrast counterfactual explanations with
methods that explain model predictions by weighting features according to their
importance (e.g., SHAP, LIME) and present two fundamental reasons why we should
carefully consider whether importance-weight explanations are well-suited to
explain system decisions. Specifically, we show that (i) features that have a
large importance weight for a model prediction may not affect the corresponding
decision, and (ii) importance weights are insufficient to communicate whether
and how features influence decisions. We demonstrate this with several concise
examples and three detailed case studies that compare the counterfactual
approach with SHAP to illustrate various conditions under which counterfactual
explanations explain data-driven decisions better than importance weights.
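To make the paper's definition concrete, below is a minimal, self-contained Python sketch of the counterfactual approach as described in the abstract: it searches over feature subsets, replaces them with baseline values, and keeps only the sets that flip the decision and contain no smaller flipping set. The loan-style feature names, linear scoring model, threshold rule, and baseline-replacement scheme are all illustrative assumptions, and the exhaustive search stands in for the heuristic procedure the paper actually proposes.

```python
from itertools import combinations

# Toy decision system: a linear scoring model plus a threshold rule.
# Feature names, weights, threshold, and the baseline-replacement scheme
# are illustrative assumptions, not taken from the paper.
WEIGHTS = {"income": 0.4, "debt": -0.5, "age": 0.05, "tenure": 0.1}
THRESHOLD = 0.0


def score(x):
    return sum(WEIGHTS[f] * v for f, v in x.items())


def decision(x):
    # The system's decision, as opposed to the model's raw prediction.
    return "deny" if score(x) < THRESHOLD else "approve"


def counterfactual_explanations(x, baseline):
    """Enumerate irreducible sets of inputs that causally drive decision(x).

    A set E explains the decision when replacing x's values on E with the
    baseline flips the decision (causal) and no proper subset of E flips it
    (irreducible). Exhaustive search; fine for a handful of features.
    """
    original = decision(x)
    flipping = set()  # all decision-flipping subsets found so far
    explanations = []
    for size in range(1, len(x) + 1):
        for subset in combinations(x, size):
            x_cf = {**x, **{f: baseline[f] for f in subset}}
            if decision(x_cf) != original:
                s = frozenset(subset)
                # Irreducible iff no smaller flipping set is contained in s;
                # smaller sets are enumerated first, so this check is complete.
                if not any(t < s for t in flipping):
                    explanations.append(subset)
                flipping.add(s)
    return explanations


applicant = {"income": 1.0, "debt": 2.0, "age": 4.0, "tenure": 1.0}
baseline = {f: 0.0 for f in applicant}  # e.g., population-default values
print(decision(applicant))                               # deny (score = -0.3)
print(counterfactual_explanations(applicant, baseline))  # [('debt',)]
```

Within this toy setup, the output also illustrates point (i) of the abstract: age and tenure contribute nonzero amounts to the score, so importance-weight methods would credit them, yet neither appears in any counterfactual explanation because changing them cannot flip the decision.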
Related papers
- Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation [0.9558392439655016]
The ability to interpret Machine Learning (ML) models is becoming increasingly important.
Recent work has demonstrated that it is possible to formally assess interpretability by studying the computational complexity of explaining the decisions of various models.
arXiv Detail & Related papers (2024-08-07T17:20:52Z)
- An AI Architecture with the Capability to Explain Recognition Results [0.0]
This research focuses on the importance of metrics for explainability and contributes two methods that yield performance gains.
The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize the explainability of a decision.
The second method compares classic metrics for estimating the effectiveness of neural networks in the system and identifies a new metric as the leading performer.
arXiv Detail & Related papers (2024-06-13T02:00:13Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, that enables selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss (a generic sketch of one such loss follows this entry).
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
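The summary above does not specify the diversity-enforcing loss, so the following is only a generic sketch of one common form: a pairwise repulsion penalty over a set of latent perturbations. The function name and the exponentiated-distance form are assumptions for illustration, not the paper's actual objective.

```python
import math
from itertools import combinations


def diversity_loss(perturbations):
    """Generic pairwise-repulsion penalty over latent perturbations.

    `perturbations` is a list of equal-length latent vectors. Each pair
    contributes a value near 1 when the two vectors coincide and near 0 when
    they are far apart, so minimizing this term (added to the main
    counterfactual objective) pushes the learned perturbations to be diverse.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    pairs = list(combinations(perturbations, 2))
    return sum(math.exp(-dist(a, b)) for a, b in pairs) / len(pairs)


# Spread-out perturbations incur a smaller penalty than clustered ones.
print(diversity_loss([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]))  # close -> ~0.89
print(diversity_loss([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]]))  # far   -> ~0.04
```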
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representations to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations (a toy contrast is sketched after this entry).
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
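To make the contrast described in the entry above concrete, the sketch below computes exact Shapley values and minimal sufficient subsets for a tiny Boolean model. The model f(x) = x1 OR (x2 AND x3), the all-zeros baseline, and the keep-or-replace coalition game are illustrative assumptions, not the paper's setup.

```python
from itertools import combinations, permutations
from math import factorial


# Tiny Boolean model: predicts 1 when x1 OR (x2 AND x3).
def f(x):
    return int(x[0] or (x[1] and x[2]))


x = (1, 1, 1)          # instance to explain; f(x) = 1
baseline = (0, 0, 0)   # "absent" features take the baseline value
n = len(x)


def v(coalition):
    """Coalition value: keep coalition features, set the rest to baseline."""
    return f(tuple(x[i] if i in coalition else baseline[i] for i in range(n)))


# Exact Shapley values: average marginal contribution over all orderings.
shapley = [0.0] * n
for order in permutations(range(n)):
    coalition = set()
    for i in order:
        before = v(coalition)
        coalition.add(i)
        shapley[i] += v(coalition) - before
shapley = [phi / factorial(n) for phi in shapley]

# Minimal sufficient subsets: smallest sets that preserve f(x) on their own.
sufficient = []
for size in range(1, n + 1):
    for s in combinations(range(n), size):
        if v(set(s)) == f(x) and not any(set(t) <= set(s) for t in sufficient):
            sufficient.append(s)

print(shapley)     # ~[0.667, 0.167, 0.167]: credit spread over all features
print(sufficient)  # [(0,), (1, 2)]: two crisp, qualitatively different sets
```

Shapley spreads credit across all three inputs, while the sufficiency view returns discrete feature sets; which of these is the right ground truth depends on the question the explanation is meant to answer.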
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.