Convex optimization for actionable & plausible counterfactual explanations
- URL: http://arxiv.org/abs/2105.07630v1
- Date: Mon, 17 May 2021 06:33:58 GMT
- Title: Convex optimization for actionable & plausible counterfactual explanations
- Authors: André Artelt and Barbara Hammer
- Abstract summary: Transparency is an essential requirement of machine learning based decision making systems that are deployed in the real world.
Counterfactual explanations are a prominent instance of particularly intuitive explanations of decision making systems.
In this work we enhance our previous work on convex modeling for computing counterfactual explanations with a mechanism for ensuring actionability and plausibility.
- Score: 9.104557591459283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transparency is an essential requirement of machine learning based decision
making systems that are deployed in the real world. Often, transparency of a given
system is achieved by providing explanations of its behavior and predictions.
Counterfactual explanations are a prominent instance of particularly intuitive
explanations of decision making systems. While many different methods for computing
counterfactual explanations exist, only very few works (apart from work in the
causality domain) consider feature dependencies as well as plausibility, which might
limit the set of possible counterfactual explanations.
In this work we enhance our previous work on convex modeling for computing
counterfactual explanations with a mechanism for ensuring actionability and
plausibility of the resulting counterfactual explanations.
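The abstract frames the counterfactual search as a convex program: find the closest point to the original instance that receives the desired prediction, subject to constraints that keep the result actionable. The following is a minimal sketch of that idea for a binary linear classifier using cvxpy; it is not the paper's implementation, and the weights, the box bounds, and the choice of immutable feature are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# Minimal sketch (not the paper's implementation) of a counterfactual
# explanation for a binary linear classifier sign(w @ x + b), phrased as a
# convex program.  All concrete values (w, b, x_orig, bounds, the immutable
# feature index) are illustrative assumptions.
w = np.array([1.2, -0.7, 0.4])       # classifier weights
b = -0.1                             # classifier bias
x_orig = np.array([0.3, 0.8, 0.5])   # instance with the unwanted decision

x_cf = cp.Variable(3)                # candidate counterfactual
eps = 1e-3                           # margin that enforces the target class

constraints = [w @ x_cf + b >= eps]  # prediction flips to the desired class

# Actionability: an immutable feature (here feature 0) must keep its original
# value; mutable features stay inside plausible ranges.
constraints.append(x_cf[0] == x_orig[0])
constraints += [x_cf >= 0.0, x_cf <= 1.0]

# The closest actionable counterfactual under an l1 cost (sparse changes).
problem = cp.Problem(cp.Minimize(cp.norm(x_cf - x_orig, 1)), constraints)
problem.solve()
print("counterfactual:", x_cf.value)
```

Because the objective and all constraints are convex, the resulting program can be solved to global optimality by standard solvers, which is the practical appeal of the convex modeling approach.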
Related papers
- On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios [46.752418052725126]
We propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations.
For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum.
For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models.
arXiv Detail & Related papers (2024-05-29T16:07:31Z) - Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z) - Computing Rule-Based Explanations by Leveraging Counterfactuals [17.613690272861053]
Rule-based explanations are inefficient to compute, and existing systems sacrifice their quality in order to achieve reasonable performance.
We propose a novel approach to compute rule-based explanations, by using a different type of explanation, Counterfactual Explanations.
We prove a Duality Theorem, showing that rule-based and counterfactual-based explanations are dual to each other, then use this observation to develop an efficient algorithm for computing rule-based explanations.
arXiv Detail & Related papers (2022-10-31T05:20:41Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Robustness and Usefulness in AI Explanation Methods [0.0]
This work summarizes, compares, and contrasts three popular explanation methods: LIME, SmoothGrad, and SHAP.
We evaluate these methods with respect to robustness, in the sense of sample complexity and stability, and understandability, in the sense that the provided explanations are consistent with user expectations.
This work concludes that current explanation methods are insufficient and that putting faith in and adopting these methods may actually be worse than simply not using them.
arXiv Detail & Related papers (2022-03-07T21:30:48Z) - Explainers in the Wild: Making Surrogate Explainers Robust to
Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the Imagenet-C dataset and demonstrate how using a perceptual distance in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z) - Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
arXiv Detail & Related papers (2020-10-06T11:50:28Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Explanations of Black-Box Model Predictions by Contextual Importance and
Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car selection example and an Iris flower classification task by presenting complete (i.e. the causes of an individual prediction) and contrastive explanations.
arXiv Detail & Related papers (2020-05-30T06:49:50Z) - Convex Density Constraints for Computing Plausible Counterfactual
Explanations [8.132423340684568]
Counterfactual explanations are considered one of the most popular techniques for explaining a specific decision of a model.
We build upon recent work and propose and study a formal definition of plausible counterfactual explanations.
In particular, we investigate how to use density estimators for enforcing plausibility and feasibility of counterfactual explanations.
arXiv Detail & Related papers (2020-02-12T09:23:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.