CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP
- URL: http://arxiv.org/abs/2405.15956v1
- Date: Fri, 24 May 2024 21:47:58 GMT
- Title: CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP
- Authors: Sopam Dasgupta, Joaquín Arias, Elmer Salazar, Gopal Gupta
- Abstract summary: We present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations.
We show how CFGs navigates between these worlds, namely, how to go from the initial state, where we obtain an undesired outcome, to the imagined goal state, where we obtain the desired decision.
- Score: 1.5749416770494706
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning models that automate decision-making are increasingly used in consequential areas such as loan approvals, pretrial bail approval, and hiring. Unfortunately, most of these models are black boxes, i.e., they are unable to reveal how they reach their prediction decisions. A need for transparency demands justification for such predictions. An affected individual might also desire explanations to understand why a decision was made. Ethical and legal considerations require informing the individual of changes in the input attribute(s) that could be made to produce a desirable outcome. Our work focuses on the latter problem of generating counterfactual explanations by considering the causal dependencies between features. In this paper, we present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations, in particular from models generated by rule-based machine learning algorithms. We benchmark CFGs with the FOLD-SE model. Reaching the counterfactual state from the initial state is planned and achieved using a series of interventions. To validate our proposal, we show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered. More importantly, we show how CFGs navigates between these worlds, namely, how to go from the initial state, where we obtain an undesired outcome, to the imagined goal state, where we obtain the desired decision, while taking into account the causal relationships among features.
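The paper's implementation relies on s(CASP)/ASP; purely to illustrate the underlying idea of intervention-based search under causal constraints, here is a minimal Python sketch. The loan-approval rule, the feature domains, and the causal dependency (unemployment forcing a low balance) are invented stand-ins, not CFGs' actual encoding.

```python
from itertools import product

# Hypothetical rule-based model (a stand-in for a FOLD-SE rule set):
# a loan is approved if the balance is high and the person is employed.
def approved(state):
    return state["balance"] == "high" and not state["unemployed"]

# Hypothetical causal dependency: employment causally affects balance,
# so every candidate state must have downstream effects re-propagated.
def propagate(state):
    s = dict(state)
    if s["unemployed"]:          # no job implies no high balance
        s["balance"] = "low"
    return s

DOMAINS = {"unemployed": [True, False], "balance": ["low", "high"]}

def counterfactual(initial):
    """Search for a causally consistent goal state with the desired outcome."""
    for values in product(*DOMAINS.values()):
        candidate = propagate(dict(zip(DOMAINS.keys(), values)))
        if approved(candidate):
            # The diff against the initial state is the set of interventions.
            changes = {k: v for k, v in candidate.items() if v != initial[k]}
            return candidate, changes
    return None, None

goal, interventions = counterfactual({"unemployed": True, "balance": "low"})
print(goal)           # {'unemployed': False, 'balance': 'high'}
print(interventions)  # attribute changes that respect the causal link
```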
Related papers
- CoGS: Model Agnostic Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
CoGS is a model-agnostic framework capable of generating counterfactual explanations for classification models.
CoGS offers interpretable and actionable explanations of the changes required to achieve the desired outcome.
arXiv Detail & Related papers (2024-10-30T00:43:01Z)
- CoGS: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the CoGS (Counterfactual Generation with s(CASP)) framework to generate counterfactuals from rule-based machine learning models.
CoGS computes realistic and causally consistent changes to attribute values, taking causal dependencies between them into account.
It finds a path from an undesired outcome to a desired one using counterfactuals.
arXiv Detail & Related papers (2024-07-11T04:50:51Z)
- Counterfactual Generation with Answer Set Programming [2.249916681499244]
We show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered/changed.
In our framework, we show how we can navigate between these worlds, namely, how to go from our original world/scenario, where we obtain an undesired outcome, to the imagined world/scenario, where we obtain a desired/favourable outcome.
arXiv Detail & Related papers (2024-02-06T20:39:49Z)
- Counterfactual Explanation Generation with s(CASP) [2.249916681499244]
Machine learning models that automate decision-making are increasingly being used in consequential areas such as loan approvals, pretrial bail, hiring, and many more.
Unfortunately, most of these models are black boxes, i.e., they are unable to reveal how they reach their prediction decisions.
This paper focuses on the latter problem of automatically generating counterfactual explanations.
arXiv Detail & Related papers (2023-10-23T02:05:42Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods for producing local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
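As a rough illustration of that idea (a shared encoder feeding both a predictor head and a generator conditioned on the desired class, so explanations need no per-instance optimisation), here is a schematic PyTorch sketch. This is not the published VCNet architecture: all layer sizes and names are assumptions, and training is omitted.

```python
import torch
import torch.nn as nn

class CounterNet(nn.Module):
    """Schematic predictor + counterfactual generator with a shared encoder."""
    def __init__(self, n_features, n_classes, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.predictor = nn.Linear(hidden, n_classes)
        # The generator is conditioned on the desired target class.
        self.generator = nn.Linear(hidden + n_classes, n_features)

    def forward(self, x, target_class):
        z = self.encoder(x)
        logits = self.predictor(z)                  # the model's own decision
        onehot = nn.functional.one_hot(target_class, logits.size(-1)).float()
        # One forward pass yields the counterfactual; no extra minimisation.
        x_cf = self.generator(torch.cat([z, onehot], dim=-1))
        return logits, x_cf

model = CounterNet(n_features=5, n_classes=2)
x = torch.randn(3, 5)
logits, x_cf = model(x, target_class=torch.tensor([1, 1, 0]))
```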
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to evaluate such counterfactual statements.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the logic expressed in the explanation.
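A toy Python sketch of this consistency check follows; the example sentences, the expected label, and the stand-in NLI model are all invented for illustration and are not the paper's actual pipeline.

```python
# Stand-in NLI model: entailment iff every hypothesis word occurs in the premise.
def model_predict(premise, hypothesis):
    words = set(premise.split())
    return "entailment" if set(hypothesis.split()) <= words else "not-entailment"

premise = "a man in a red shirt is running"
hypothesis = "a man is running"
assert model_predict(premise, hypothesis) == "entailment"

# The explanation's logic asserts the shirt's colour in the premise, so a
# counterfactual hypothesis naming a different colour should NOT be entailed.
cf_hypothesis = "a man in a blue shirt is running"
expected_by_logic = "not-entailment"

# Faithful iff the model's behaviour on the counterfactual matches that logic.
faithful = model_predict(premise, cf_hypothesis) == expected_by_logic
print(faithful)  # True for this stub model
```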
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take into account several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
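To make the multi-objective framing concrete, here is a minimal Python sketch that scores candidate counterfactuals on three competing criteria and keeps the Pareto front. The objective definitions and the stand-in classifier are invented, and a naive Pareto filter stands in for the paper's multi-objective optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(x):
    # Stand-in black-box model: positive class iff the feature sum is large.
    return int(x.sum() > 1.0)

def objectives(x, x_cf, data):
    """Three illustrative criteria (smaller is better for all of them)."""
    change_intensity = np.abs(x_cf - x).sum()               # size of the edit
    implausibility = np.abs(data - x_cf).sum(axis=1).min()  # distance to data
    adversarial_power = -abs(x_cf.sum() - 1.0)              # margin past boundary
    return np.array([change_intensity, implausibility, adversarial_power])

def pareto_front(scores):
    """Keep candidates that no other candidate dominates on every objective."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t <= s) and np.any(t < s)
                        for j, t in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

x = np.array([0.2, 0.3])                          # factual input, class 0
data = rng.normal(0.5, 0.3, size=(100, 2))        # reference data for plausibility
candidates = [x + rng.normal(0, 0.6, size=2) for _ in range(200)]
valid = [c for c in candidates if classifier(c) == 1]   # outcome actually flips
scores = np.array([objectives(x, c, data) for c in valid])
print([valid[i] for i in pareto_front(scores)])   # trade-off solutions
```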
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations [0.0]
This paper introduces CARE, a modular explanation framework that addresses the model- and user-level desiderata.
As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model.
arXiv Detail & Related papers (2021-08-18T15:26:59Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.