Computing Rule-Based Explanations by Leveraging Counterfactuals
- URL: http://arxiv.org/abs/2210.17071v1
- Date: Mon, 31 Oct 2022 05:20:41 GMT
- Title: Computing Rule-Based Explanations by Leveraging Counterfactuals
- Authors: Zixuan Geng, Maximilian Schleich, Dan Suciu
- Abstract summary: Rule-based explanations are inefficient to compute, and existing systems sacrifice their quality in order to achieve reasonable performance.
We propose a novel approach to compute rule-based explanations by using a different type of explanation, Counterfactual Explanations.
We prove a Duality Theorem, showing that rule-based and counterfactual-based explanations are dual to each other, and then use this observation to develop an efficient algorithm for computing rule-based explanations.
- Score: 17.613690272861053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sophisticated machine learning models are increasingly used for high-stakes decisions
in everyday life. There is an urgent need to develop effective explanation
techniques for such automated decisions. Rule-Based Explanations have been
proposed for high-stakes decisions like loan applications, because they increase
the users' trust in the decision. However, rule-based explanations are very
inefficient to compute, and existing systems sacrifice their quality in order
to achieve reasonable performance. We propose a novel approach to compute
rule-based explanations by using a different type of explanation,
Counterfactual Explanations, for which several efficient systems have already
been developed. We prove a Duality Theorem, showing that rule-based and
counterfactual-based explanations are dual to each other, and then use this
observation to develop an efficient algorithm for computing rule-based
explanations, which uses the counterfactual-based explanation as an oracle. We
conduct extensive experiments showing that our system computes rule-based
explanations of higher quality, and with the same or better performance, than
two previous systems, MinSetCover and Anchor.
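The duality suggests an oracle-driven loop: a rule (a set of features fixed to the instance's values) is valid exactly when no counterfactual can flip the prediction while leaving those features untouched. Below is a minimal Python sketch of that loop, not the paper's actual implementation; the oracle interface `cf_oracle(x, frozen)` and the dict encoding of instances are assumptions for illustration.

```python
# Hedged sketch of the duality idea: grow a rule by repeatedly asking a
# counterfactual oracle to flip the prediction without touching the
# features the rule has already fixed. `cf_oracle(x, frozen)` is a
# hypothetical black box: it returns a counterfactual instance for x
# that keeps every feature in `frozen` unchanged, or None if no such
# counterfactual exists.

def rule_from_counterfactuals(x, cf_oracle):
    """Compute a rule-based explanation for instance x (a dict of
    feature -> value) using a counterfactual oracle."""
    frozen = set()  # features the rule fixes to x's current values
    while True:
        cf = cf_oracle(x, frozen)
        if cf is None:
            # No counterfactual can evade the rule: the rule is valid.
            break
        # The counterfactual changed at least one free feature;
        # fix one of them so this counterfactual is ruled out.
        changed = [f for f in x if f not in frozen and cf[f] != x[f]]
        frozen.add(changed[0])
    # Read the rule as: "IF these features keep their current values,
    # THEN the prediction does not change."
    return {f: x[f] for f in frozen}
```

Which changed feature is fixed at each step determines how small the final rule ends up; a real implementation would need to make that choice (or backtrack) more carefully, but the oracle-in-a-loop structure is the point of the duality.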
Related papers
- An AI Architecture with the Capability to Explain Recognition Results [0.0]
This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains.
The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize explainability of a decision.
The second method compares classic metrics for estimating the effectiveness of neural networks in the system, and identifies a new metric as the leading performer.
arXiv Detail & Related papers (2024-06-13T02:00:13Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
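As a rough sketch of the constraint idea in UFCE above (not its actual algorithm): restricting a counterfactual search to a user-approved subset of actionable features can be done by mutating only those indices, preferring the sparsest and closest change. The classifier interface, the feature bounds, and the grid search below are made-up assumptions.

```python
import itertools
import numpy as np

def constrained_counterfactual(x, predict, actionable, bounds, steps=10):
    """Toy user-constrained counterfactual search (a sketch, not UFCE):
    find the smallest change to x (a 1-D float array), touching only the
    user-approved `actionable` feature indices within `bounds`, that
    flips the binary 0/1 classifier `predict`."""
    target = 1 - predict(x)
    best, best_cost = None, np.inf
    for k in range(1, len(actionable) + 1):  # sparsest subsets first
        for subset in itertools.combinations(actionable, k):
            grids = [np.linspace(*bounds[i], steps) for i in subset]
            for values in itertools.product(*grids):
                cand = x.copy()
                cand[list(subset)] = values
                if predict(cand) == target:
                    cost = np.abs(cand - x).sum()  # proximity (L1)
                    if cost < best_cost:
                        best, best_cost = cand, cost
        if best is not None:
            return best  # smallest subset size that succeeded
    return None
```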
- Explainability for Machine Learning Models: From Data Adaptability to User Perception [0.8702432681310401]
This thesis explores the generation of local explanations for already deployed machine learning models.
It aims to identify optimal conditions for producing meaningful explanations considering both data and user requirements.
arXiv Detail & Related papers (2024-02-16T18:44:37Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- Detection Accuracy for Evaluating Compositional Explanations of Units [5.220940151628734]
Network Dissection and Compositional Explanations are two methods that explain individual units of neural networks with atomic concepts or logical forms of concepts.
While intuitively, logical forms are more informative than atomic concepts, it is not clear how to quantify this improvement.
We propose to use as evaluation metric the Detection Accuracy, which measures units' consistency of detection of their assigned explanations.
arXiv Detail & Related papers (2021-09-16T08:47:34Z)
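One plausible reading of the Detection Accuracy metric above, sketched below with made-up binary masks, is the rate at which a unit's thresholded activation agrees with its assigned explanation across samples; the paper's exact definition may differ.

```python
import numpy as np

def detection_accuracy(unit_masks, concept_masks):
    """Toy detection-accuracy-style score (a sketch, not necessarily the
    paper's formula): the fraction of samples on which the unit's binary
    activation mask and the binary mask of its assigned explanation agree
    about whether anything is detected."""
    agree = sum(
        int(fired.any() == concept.any())
        for fired, concept in zip(unit_masks, concept_masks)
    )
    return agree / len(unit_masks)

# Hypothetical usage with random masks:
rng = np.random.default_rng(0)
units = rng.random((100, 7, 7)) > 0.8      # thresholded unit activations
concepts = rng.random((100, 7, 7)) > 0.8   # explanation segmentation masks
print(detection_accuracy(units, concepts))
```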
- Convex optimization for actionable & plausible counterfactual explanations [9.104557591459283]
Transparency is an essential requirement of machine learning-based decision-making systems deployed in the real world.
Counterfactual explanations are a prominent instance of particularly intuitive explanations of decision-making systems.
In this work, we extend our previous work on convex modeling for computing counterfactual explanations with a mechanism for ensuring actionability and plausibility.
arXiv Detail & Related papers (2021-05-17T06:33:58Z)
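For a linear classifier, the actionability-and-plausibility construction above can be phrased directly as a convex program. The cvxpy sketch below uses made-up weights; it freezes one non-actionable feature and applies box constraints as a crude stand-in for plausibility, illustrating the general shape rather than the paper's exact model.

```python
import cvxpy as cp
import numpy as np

# Toy linear classifier sign(w @ x + b); weights and instance are made up.
w = np.array([1.5, -2.0, 0.5])
b = -0.3
x = np.array([0.2, 0.4, 0.1])  # instance currently classified negative

x_cf = cp.Variable(3)
constraints = [
    w @ x_cf + b >= 0.01,   # cross the decision boundary with a margin
    x_cf[2] == x[2],        # actionability: feature 2 is immutable
    x_cf >= 0, x_cf <= 1,   # plausibility: stay inside observed ranges
]
# The l1 objective favors sparse, close counterfactuals.
cp.Problem(cp.Minimize(cp.norm1(x_cf - x)), constraints).solve()
print("counterfactual:", x_cf.value)
```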
- Efficient computation of contrastive explanations [8.132423340684568]
We study the relation of contrastive and counterfactual explanations.
We propose a 2-phase algorithm for efficiently computing (plausible) pertinent positives of many standard machine learning models.
arXiv Detail & Related papers (2020-10-06T11:50:28Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated: on models with more than 8000 variables, checking completes in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
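The paper above targets MaxSAT and ILP encodings; as a toy illustration of the kind of check being automated (a simple but-for test rather than the full actual-causality definitions), the brute-force sketch below evaluates a binary acyclic structural model and asks whether flipping a candidate cause changes the outcome. The model encoding is an assumption for illustration.

```python
def evaluate(model, exogenous):
    """Evaluate a binary acyclic structural model. `model` maps each
    endogenous variable to a function of the current assignment, listed
    in topological order; `exogenous` fixes the input variables."""
    values = dict(exogenous)
    for var, fn in model.items():
        values[var] = fn(values)
    return values

def but_for_cause(model, exogenous, candidate, outcome):
    """Toy but-for test: does flipping the exogenous `candidate`
    variable change `outcome`? (Actual causality as formalized in the
    paper is a richer notion than this.)"""
    actual = evaluate(model, exogenous)[outcome]
    flipped = dict(exogenous, **{candidate: 1 - exogenous[candidate]})
    return evaluate(model, flipped)[outcome] != actual

# Hypothetical forest-fire example: fire = lightning OR match.
model = {"fire": lambda v: v["lightning"] | v["match"]}
print(but_for_cause(model, {"lightning": 1, "match": 0}, "lightning", "fire"))
```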