Convex Density Constraints for Computing Plausible Counterfactual
Explanations
- URL: http://arxiv.org/abs/2002.04862v2
- Date: Mon, 3 Aug 2020 08:14:22 GMT
- Title: Convex Density Constraints for Computing Plausible Counterfactual
Explanations
- Authors: André Artelt, Barbara Hammer
- Abstract summary: Counterfactual explanations are considered one of the most popular techniques for explaining a specific decision of a model.
We build upon recent work and propose and study a formal definition of plausible counterfactual explanations.
In particular, we investigate how to use density estimators for enforcing plausibility and feasibility of counterfactual explanations.
- Score: 8.132423340684568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing deployment of machine learning, together with legal regulations
such as the EU's GDPR, creates a need for user-friendly explanations of decisions
proposed by machine learning models. Counterfactual explanations are considered
one of the most popular techniques for explaining a specific decision of a
model. While the computation of "arbitrary" counterfactual explanations is well
studied, efficiently computing plausible and feasible counterfactual
explanations remains an open research problem. We build upon recent work
and propose and study a formal definition of plausible counterfactual
explanations. In particular, we investigate how to use density estimators for
enforcing plausibility and feasibility of counterfactual explanations. For the
purpose of efficient computations, we propose convex density constraints that
ensure that the resulting counterfactual is located in a region of the data
space of high density.
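The core idea can be illustrated with a minimal sketch (all models and numbers below are hypothetical, not taken from the paper): for a Gaussian density estimator, the log-density is a concave quadratic in x, so requiring log p(x) >= delta is a convex constraint, and the search for a nearby counterfactual becomes a convex program. Here it is solved numerically with SciPy's SLSQP rather than a dedicated convex solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup (all values hypothetical): a linear classifier sign(w.x + b)
# and a single Gaussian density model N(mu, Sigma) for the target class.
w = np.array([1.0, 1.0]); b = -4.0   # decision boundary w.x + b = 0
mu = np.array([3.0, 3.0])            # mean of the target-class data
Sigma_inv = np.eye(2)                # inverse covariance (identity here)
x_orig = np.array([0.0, 0.0])       # instance currently classified negative
delta = -2.0                         # log-density threshold (hyperparameter)

# Objective: keep the counterfactual close to the original instance.
def obj(x):
    return np.sum((x - x_orig) ** 2)

cons = [
    # Flip the prediction: require w.x + b >= 0.
    {"type": "ineq", "fun": lambda x: w @ x + b},
    # Density constraint: the Gaussian log-density (up to a constant) is a
    # concave quadratic, so "log p(x) >= delta" is a convex constraint.
    {"type": "ineq", "fun": lambda x: -0.5 * (x - mu) @ Sigma_inv @ (x - mu) - delta},
]

res = minimize(obj, x_orig, constraints=cons, method="SLSQP")
x_cf = res.x
print("counterfactual:", np.round(x_cf, 3))
```

For a Gaussian mixture, the paper's approach replaces the single quadratic constraint with convex (component-wise quadratic) approximations, keeping the overall problem tractable.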
Related papers
- From Model Explanation to Data Misinterpretation: Uncovering the Pitfalls of Post Hoc Explainers in Business Research [3.7209396288545338]
We find a growing trend in business research where post hoc explanations are used to draw inferences about the data.
The ultimate goal of this paper is to caution business researchers against translating post hoc explanations of machine learning models into potentially false insights and understanding of data.
arXiv Detail & Related papers (2024-08-30T03:22:35Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- From Robustness to Explainability and Back Again [0.685316573653194]
The paper addresses the limited scalability of formal explainability and proposes novel algorithms for computing formal explanations.
The proposed algorithm computes explanations by instead answering a number of robustness queries, where the number of such queries is at most linear in the number of features.
The experiments validate the practical efficiency of the proposed approach.
arXiv Detail & Related papers (2023-06-05T17:21:05Z)
- STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning [111.75423966239092]
We propose an exploration incentive in terms of the integral probability metric (IPM) between a current estimate of the transition model and the unknown optimal one.
Based on KSD, we develop a novel algorithm, STEERING: STEin information dirEcted exploration for model-based Reinforcement LearnING.
arXiv Detail & Related papers (2023-01-28T00:49:28Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z)
- Convex optimization for actionable & plausible counterfactual explanations [9.104557591459283]
Transparency is an essential requirement of machine-learning-based decision-making systems deployed in the real world.
Counterfactual explanations are a prominent instance of particularly intuitive explanations of decision-making systems.
In this work we enhance our previous work on convex modeling for computing counterfactual explanations with a mechanism for ensuring actionability and plausibility.
arXiv Detail & Related papers (2021-05-17T06:33:58Z)
- Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
arXiv Detail & Related papers (2020-10-06T11:50:28Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization-free and thus much faster to compute than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)
- Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness [35.520178007455556]
This paper builds upon an alternative consumer-driven approach called TED that asks for explanations to be provided in training data, along with target labels.
Experiments are conducted to investigate some practical considerations with TED, including its performance with different classification algorithms.
arXiv Detail & Related papers (2020-01-13T18:45:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.