Efficient computation of contrastive explanations
- URL: http://arxiv.org/abs/2010.02647v2
- Date: Mon, 4 Jan 2021 10:09:40 GMT
- Title: Efficient computation of contrastive explanations
- Authors: André Artelt and Barbara Hammer
- Abstract summary: We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
- Score: 8.132423340684568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing deployment of machine learning systems in practice,
transparency and explainability have become serious issues. Contrastive
explanations are considered to be useful and intuitive, in particular when it
comes to explaining decisions to lay people, since they mimic the way in which
humans explain. Yet, so far, comparatively little research has addressed
computationally feasible technologies, which allow guarantees on uniqueness and
optimality of the explanation and which enable an easy incorporation of
additional constraints. Here, we will focus on specific types of models rather
than black-box technologies. We study the relation of contrastive and
counterfactual explanations and propose mathematical formalizations as well as
a 2-phase algorithm for efficiently computing (plausible) pertinent positives
of many standard machine learning models.
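The abstract does not spell out the two phases here, but the core ingredient (a model-specific convex program for a pertinent positive) can be sketched for a linear classifier. A pertinent positive is, roughly, a minimal part of the input that is by itself sufficient for the original prediction. The sketch below rests on assumptions not stated in the abstract: non-negative features, "part of the input" modeled as the containment constraint 0 <= x' <= x, and cvxpy as the solver; it is an illustration, not the authors' exact formulation.

```python
# Minimal sketch (not the paper's exact 2-phase algorithm): a convex program
# for a "pertinent positive" of a linear classifier sign(w @ x + b).
# Assumptions: non-negative features; sparsity encouraged via the l1 norm.
import numpy as np
import cvxpy as cp

def pertinent_positive(w, b, x, margin=1e-3):
    y = np.sign(w @ x + b)            # class of the original input (+1 / -1)
    xp = cp.Variable(x.shape[0])
    constraints = [
        xp >= 0, xp <= x,             # x' must be contained in the input
        y * (w @ xp + b) >= margin,   # x' alone still yields the same class
    ]
    cp.Problem(cp.Minimize(cp.norm1(xp)), constraints).solve()
    return xp.value                   # sparsest sufficient part of x

w, b = np.array([1.0, -2.0, 0.5]), -0.5
x = np.array([2.0, 0.5, 1.0])
print(pertinent_positive(w, b, x))
```

Because the objective and constraints are convex (here even linear), the solver returns a global optimum, which is the kind of uniqueness and optimality guarantee the abstract emphasizes over black-box search.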
Related papers
- T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients [5.946429628497358]
We introduce T-Explainer, a novel local additive attribution explainer based on Taylor expansion.
It has desirable properties, such as local accuracy and consistency, making T-Explainer stable over multiple runs.
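The summary names the key idea but not the estimator. Below is a minimal sketch of first-order Taylor attribution (gradient times input-minus-baseline), with a finite-difference gradient and a hypothetical all-zeros baseline standing in for T-Explainer's actual optimization-based gradient estimation, which is described in the paper itself.

```python
# Hedged sketch of first-order Taylor attribution: f(x) ~ f(x0) + grad . (x - x0),
# so feature i is credited grad_i * (x_i - x0_i). Central finite differences
# stand in for T-Explainer's own gradient estimation scheme.
import numpy as np

def taylor_attribution(f, x, baseline, eps=1e-5):
    grad = np.zeros_like(x)
    for i in range(x.size):                  # finite-difference gradient at x
        d = np.zeros_like(x); d[i] = eps
        grad[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad * (x - baseline)             # additive per-feature contributions

f = lambda z: z[0] ** 2 + 3 * z[1]           # toy model
print(taylor_attribution(f, np.array([1.0, 2.0]), np.zeros(2)))
```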
arXiv Detail & Related papers (2024-04-25T10:40:49Z)
- Even-if Explanations: Formal Foundations, Priorities and Complexity [18.126159829450028]
We show that both linear and tree-based models are strictly more interpretable than neural networks.
We introduce a preference-based framework that enables users to personalize explanations based on their preferences.
arXiv Detail & Related papers (2024-01-17T11:38:58Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
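As a rough illustration of what an explanation constraint can look like in practice, the sketch below trains a linear model with an added penalty whenever the model's explanation (for a linear model, simply its weights) violates prior knowledge. The specific constraint, data, and penalty form are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: gradient descent on MSE plus a penalty max(0, -w[0])**2 that
# encodes a hypothetical expert constraint "feature 0 contributes positively".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.0]) + 0.1 * rng.normal(size=200)

w, lam, lr = np.zeros(3), 5.0, 0.01
for _ in range(500):
    grad_fit = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE term
    grad_con = np.zeros(3)
    grad_con[0] = 2 * min(w[0], 0.0)           # gradient of max(0, -w[0])**2
    w -= lr * (grad_fit + lam * grad_con)
print(w)  # learned weights respect the sign constraint on feature 0
```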
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge intelligibility of fundamental concepts that more elaborate artificial intelligence explanations are built from.
arXiv Detail & Related papers (2023-03-02T03:15:35Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
arXiv Detail & Related papers (2022-09-08T13:33:25Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
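The mechanics can be sketched generically: search for the counterfactual in a latent representation so that decoded candidates stay close to the data manifold. In the sketch below, `encode`, `decode`, and `predict` are hypothetical stand-ins, and the simple gradient search ignores the causal structure that CEILS actually builds into its latent space.

```python
# Heavily hedged sketch of counterfactual search in latent space: optimize a
# latent code z so the decoded input gets the target prediction, while staying
# close to the original code. Finite differences avoid assuming differentiability.
import numpy as np

def latent_counterfactual(x, encode, decode, predict, target,
                          steps=200, lr=0.1, reg=0.1, eps=1e-4):
    z0 = encode(x)
    z = z0.copy()
    loss = lambda zz: (predict(decode(zz)) - target) ** 2 + reg * np.sum((zz - z0) ** 2)
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):              # finite-difference latent gradient
            d = np.zeros_like(z); d[i] = eps
            grad[i] = (loss(z + d) - loss(z - d)) / (2 * eps)
        z -= lr * grad
    return decode(z)                         # decoded counterfactual input

# Toy usage with identity encoder/decoder and a linear "model"
f = lambda v: float(v @ np.array([1.0, -1.0]))
print(latent_counterfactual(np.array([0.5, 0.5]), lambda v: v.copy(),
                            lambda v: v, f, target=1.0))
```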
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Convex optimization for actionable & plausible counterfactual explanations [9.104557591459283]
Transparency is an essential requirement of machine learning based decision-making systems deployed in the real world.
Counterfactual explanations are a prominent and particularly intuitive type of explanation of such systems.
In this work we extend our previous work on convex modeling for computing counterfactual explanations with a mechanism for ensuring actionability and plausibility.
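A minimal sketch of how actionability can enter such a convex program is given below for a linear classifier: immutable features are frozen with equality constraints, and box bounds act as a crude plausibility proxy. The paper's actual plausibility mechanism is richer than simple box bounds, so treat this as an assumption-laden illustration.

```python
# Hedged sketch: closest counterfactual x' that flips a linear classifier,
# with actionability (immutable features frozen) and box bounds on the rest.
import numpy as np
import cvxpy as cp

def actionable_counterfactual(w, b, x, immutable, lo, hi, margin=1e-3):
    xp = cp.Variable(x.shape[0])
    y = np.sign(w @ x + b)                      # original label, to be flipped
    cons = [-y * (w @ xp + b) >= margin,        # opposite side of the boundary
            xp >= lo, xp <= hi]                 # stay within observed ranges
    cons += [xp[i] == x[i] for i in immutable]  # actionability: freeze features
    cp.Problem(cp.Minimize(cp.norm1(xp - x)), cons).solve()
    return xp.value

w, b = np.array([1.0, -1.0, 2.0]), -0.5
x = np.array([0.2, 0.4, 0.1])
print(actionable_counterfactual(w, b, x, immutable=[0],
                                lo=np.zeros(3), hi=np.ones(3)))
```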
arXiv Detail & Related papers (2021-05-17T06:33:58Z)
- Explanations of Black-Box Model Predictions by Contextual Importance and Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car-selection example and in Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
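The usual way CI and CU are computed can be sketched by sampling: vary one feature over its allowed range while the rest of the context stays fixed, then relate the reachable output range to the globally possible one (CI) and locate the actual output within the reachable range (CU). The formulas below follow common presentations of CI/CU and may differ in detail from this paper's exact definitions.

```python
# Hedged sketch of Contextual Importance (CI) and Contextual Utility (CU),
# estimated by sweeping a single feature j while holding the context fixed.
import numpy as np

def ci_cu(f, x, j, feat_range, out_min, out_max, n=256):
    xs = np.tile(x, (n, 1))
    xs[:, j] = np.linspace(feat_range[0], feat_range[1], n)  # sweep feature j
    ys = np.array([f(row) for row in xs])
    cmin, cmax = ys.min(), ys.max()
    ci = (cmax - cmin) / (out_max - out_min)       # reachable share of output range
    cu = (f(x) - cmin) / max(cmax - cmin, 1e-12)   # where the actual output sits
    return ci, cu

f = lambda v: 1 / (1 + np.exp(-(2 * v[0] - v[1])))  # toy probability model
print(ci_cu(f, np.array([0.3, 0.8]), j=0, feat_range=(0, 1),
            out_min=0.0, out_max=1.0))
```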
arXiv Detail & Related papers (2020-05-30T06:49:50Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.