Counterfactual Generation with Answer Set Programming
- URL: http://arxiv.org/abs/2402.04382v1
- Date: Tue, 6 Feb 2024 20:39:49 GMT
- Title: Counterfactual Generation with Answer Set Programming
- Authors: Sopam Dasgupta, Farhad Shakerin, Joaquín Arias, Elmer Salazar, Gopal Gupta
- Abstract summary: We show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered.
In our framework, we show how to navigate between these worlds: from the original world, where we obtain an undesired outcome, to an imagined world, where we obtain the desired outcome.
- Score: 2.249916681499244
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning models that automate decision-making are increasingly being
used in consequential areas such as loan approvals, pretrial bail approval,
hiring, and many more. Unfortunately, most of these models are black boxes,
i.e., they do not reveal how they reach their prediction decisions. A
need for transparency demands justification for such predictions. An affected
individual might also desire explanations to understand why a decision was
made. Ethical and legal considerations may further require informing the
individual of changes in the input attributes that could be made to produce a
desirable outcome. This paper focuses on the latter problem of automatically
generating counterfactual explanations. We propose a framework Counterfactual
Generation with s(CASP) (CFGS) that utilizes answer set programming (ASP) and
the s(CASP) goal-directed ASP system to automatically generate counterfactual
explanations from rules generated by rule-based machine learning (RBML)
algorithms. In our framework, we show how counterfactual explanations are
computed and justified by imagining worlds where some or all factual
assumptions are altered. More importantly, we show how to navigate between
these worlds: from the original world, where we obtain an undesired outcome,
to an imagined world, where we obtain the desired, favourable outcome.
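To make the idea concrete, here is a minimal, hypothetical s(CASP)-style sketch; the rules, predicate names (loan_approved, good_credit, defaulted, many_missed_payments), and facts are illustrative assumptions, not taken from the paper.

    % Hypothetical RBML-learned decision rules (illustrative only).
    loan_approved(X) :- good_credit(X), not defaulted(X).
    defaulted(X) :- many_missed_payments(X).

    % Factual world: no good_credit(john) fact exists, so the query
    % ?- loan_approved(john). fails, i.e., the loan is denied.

    % Imagined (counterfactual) world: alter the factual assumptions
    % by asserting good credit for john.
    good_credit(john).

    % Query in the imagined world; s(CASP) proves it goal-directedly
    % and returns a justification tree for the answer.
    ?- loan_approved(john).

Under this reading, the counterfactual explanation is the set of altered factual assumptions (here, asserting good_credit(john)) that moves the query from failure in the original world to success in the imagined one.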
Related papers
- CoGS: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the CoGS (Counterfactual Generation with s(CASP)) framework to generate counterfactuals from rule-based machine learning models.
CoGS computes realistic and causally consistent changes to attribute values, taking causal dependencies between them into account.
It finds a path from an undesired outcome to a desired one using counterfactuals.
arXiv Detail & Related papers (2024-07-11T04:50:51Z) - CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations.
We show how CFGs navigates between these worlds, namely, how to go from the initial state, where we obtain an undesired outcome, to an imagined goal state, where we obtain the desired decision (a hypothetical sketch of the causality constraint follows below).
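As a rough, hypothetical illustration of the causality-constrained idea named in these titles (the predicates, the dependency, and the facts are assumptions, not taken from these papers), an imagined world can be rejected when an intervention violates a causal dependency:

    % Hypothetical causal dependency: a master's degree requires
    % being an adult; worlds violating it are inconsistent.
    inconsistent(X) :- education(X, masters), not adult(X).
    plausible_world(X) :- person(X), not inconsistent(X).

    % Candidate counterfactual world for jane.
    person(jane).
    education(jane, masters).
    adult(jane).

    % Succeeds only because adult(jane) holds alongside the
    % intervened fact education(jane, masters).
    ?- plausible_world(jane).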
arXiv Detail & Related papers (2024-05-24T21:47:58Z) - Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain-of-thought finetuning (cot-finetuning) aims to endow small language models (SLMs) with reasoning ability to improve their performance on specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables the SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in the rationale.
We propose a robust post-thinking mechanism that generates the answer before the rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z)
- A Hypothesis-Driven Framework for the Analysis of Self-Rationalising Models [0.8702432681310401]
We use a Bayesian network to implement a hypothesis about how a task is solved.
The resulting models do not exhibit a strong similarity to GPT-3.5.
We discuss the implications of this as well as the framework's potential to approximate LLM decisions better in future work.
arXiv Detail & Related papers (2024-02-07T12:26:12Z)
- Counterfactual Explanation Generation with s(CASP) [2.249916681499244]
Machine learning models that automate decision-making are increasingly being used in consequential areas such as loan approvals, pretrial bail, hiring, and many more.
Unfortunately, most of these models are black boxes, i.e., they do not reveal how they reach their prediction decisions.
This paper focuses on the problem of automatically generating counterfactual explanations.
arXiv Detail & Related papers (2023-10-23T02:05:42Z)
- Explaining $\mathcal{ELH}$ Concept Descriptions through Counterfactual Reasoning [3.5323691899538128]
An intrinsically transparent way to do classification is by using concepts in description logics.
One solution is to employ counterfactuals to answer the question, "How must feature values be changed to obtain a different classification?"
arXiv Detail & Related papers (2023-01-12T16:06:06Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanations are a class of methods for locally explaining machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Bayesian Inference Forgetting [82.6681466124663]
The right to be forgotten has been legislated in many countries, but enforcing it in machine learning would incur prohibitive costs.
This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.
arXiv Detail & Related papers (2021-01-16T09:52:51Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial, still-missing piece of the puzzle is understanding how to automate the most elaborate part of the fact-checking process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact-checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
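In symbols (a plausible rendering with assumed notation; the paper's exact symbols may differ): writing $u$ for the user's knowledge, $e$ for the explanation, and $\hat{y}$ for the prediction, the quantity is the conditional mutual information $I(e; \hat{y} \mid u)$.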
arXiv Detail & Related papers (2020-03-01T13:06:29Z)