Counterfactual Explanation Generation with s(CASP)
- URL: http://arxiv.org/abs/2310.14497v1
- Date: Mon, 23 Oct 2023 02:05:42 GMT
- Title: Counterfactual Explanation Generation with s(CASP)
- Authors: Sopam Dasgupta, Farhad Shakerin, Joaquín Arias, Elmer Salazar, Gopal Gupta
- Abstract summary: Machine learning models that automate decision-making are increasingly being used in consequential areas such as loan approvals, pretrial bail, hiring, and many more.
Unfortunately, most of these models are black boxes, i.e., they are unable to reveal how they reach their prediction decisions.
This paper focuses on automatically generating counterfactual explanations.
- Score: 2.249916681499244
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning models that automate decision-making are increasingly being
used in consequential areas such as loan approvals, pretrial bail, hiring, and
many more. Unfortunately, most of these models are black boxes, i.e., they are
unable to reveal how they reach their prediction decisions. A need for
transparency demands justification for such predictions. An affected individual
might desire explanations to understand why a decision was made. Ethical and
legal considerations may further require informing the individual of changes in
the input attribute that could be made to produce a desirable outcome. This
paper focuses on the latter problem of automatically generating counterfactual
explanations. Our approach utilizes answer set programming and the s(CASP)
goal-directed ASP system. Answer Set Programming (ASP) is a well-known
knowledge representation and reasoning paradigm. s(CASP) is a goal-directed ASP
system that executes answer-set programs top-down without grounding them. The
query-driven nature of s(CASP) allows us to provide justifications as proof
trees, which makes it possible to analyze the generated counterfactual
explanations. We show how counterfactual explanations are computed and
justified by imagining multiple possible worlds where some or all factual
assumptions are untrue and, more importantly, how we can navigate between these
worlds. We also show how our algorithm can be used to find the Craig
interpolant for a class of answer set programs, given a failing query.
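To make the mechanism concrete, here is a minimal sketch in s(CASP)-style ASP, assuming a toy loan-approval domain; the predicates (approve, high_income, defaulted) are illustrative and are not the paper's actual encoding. Even loops make each feature assumable, so the goal-directed engine can entertain the multiple possible worlds the abstract describes and report, through its justification tree, which factual assumptions must be flipped.

```
% Illustrative sketch (not the paper's encoding): a loan is approved
% when income is high and there is no default on record.
approve :- high_income, not defaulted.

% Even loops turn each feature into an assumable fact, so s(CASP)
% can explore possible worlds where the feature holds or does not.
high_income   :- not n_high_income.
n_high_income :- not high_income.
defaulted     :- not n_defaulted.
n_defaulted   :- not defaulted.

% Factual world (undesired outcome): low income, no default.
%   ?- n_high_income, n_defaulted, not approve.
% succeeds: the loan is rejected in this world.

% Counterfactual query: hold "no default" fixed and ask for a world
% in which approval holds.
%   ?- n_defaulted, approve.
% succeeds in the world {high_income, n_defaulted}: raising income
% flips the decision, and the justification tree records exactly
% which factual assumption was changed.
```

For the interpolation claim at the end of the abstract, the standard background statement is: if $A \wedge B$ is unsatisfiable, there exists an interpolant $I$ over the symbols common to $A$ and $B$ such that $A \models I$ and $I \wedge B$ is unsatisfiable; the paper applies its algorithm to extract such an $I$ from a failing query over a class of answer set programs.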
Related papers
- CFGs: Causality Constrained Counterfactual Explanations using goal-directed ASP [1.5749416770494706]
We present the framework CFGs, CounterFactual Generation with s(CASP), which utilizes the goal-directed Answer Set Programming (ASP) system s(CASP) to automatically generate counterfactual explanations.
We show how CFGs navigates between these worlds, namely, how to go from the initial state, where we obtain an undesired outcome, to the imagined goal state, where we obtain the desired decision.
arXiv Detail & Related papers (2024-05-24T21:47:58Z)
- Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain-of-thought finetuning (cot-finetuning) aims to endow small language models (SLMs) with reasoning ability to improve their performance on specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, in which the SLM generates a rationale before providing an answer.
This mechanism enables the SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in the rationale.
We propose a robust post-thinking mechanism that generates the answer before the rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z)
- Counterfactual Generation with Answer Set Programming [2.249916681499244]
We show how counterfactual explanations are computed and justified by imagining worlds where some or all factual assumptions are altered.
In our framework, we show how we can navigate between these worlds, namely, go from the original world, where we obtain an undesired outcome, to an imagined world, where we obtain the desired outcome.
arXiv Detail & Related papers (2024-02-06T20:39:49Z)
- Explaining Explanations in Probabilistic Logic Programming [0.0]
In most approaches, the system is considered a black box, making it difficult to generate appropriate explanations.
We consider a setting where models are transparent: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation with probability for modeling uncertainty.
We present an approach to explaining explanations based on a new query-driven inference mechanism for PLP in which proofs are labeled with "choice expressions", a compact and easily manipulated representation of sets of choices.
arXiv Detail & Related papers (2024-01-30T14:27:37Z)
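As background for the PLP entry above, a standard way to read "probability to model uncertainty" is the distribution semantics; this is the general textbook formulation, not necessarily the exact one used in that paper. Each probabilistic fact $f_i$ is independently true with probability $p_i$, inducing a distribution over worlds $w$, and a query $q$ succeeds with probability

$$P(q) = \sum_{w \,\models\, q} P(w), \qquad P(w) = \prod_{f_i \in w} p_i \prod_{f_i \notin w} (1 - p_i).$$

A "choice expression" in the entry's sense then serves as a compact label for the set of such choices under which a particular proof goes through.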
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Explaining $\mathcal{ELH}$ Concept Descriptions through Counterfactual Reasoning [3.5323691899538128]
An intrinsically transparent way to do classification is by using concepts in description logics.
One solution is to employ counterfactuals to answer the question, "How must feature values be changed to obtain a different classification?"
arXiv Detail & Related papers (2023-01-12T16:06:06Z)
- DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases [81.19499764899359]
We propose DecAF, a novel framework that jointly generates both logical forms and direct answers.
DecAF achieves new state-of-the-art accuracy on WebQSP, FreebaseQA, and GrailQA benchmarks.
arXiv Detail & Related papers (2022-09-30T19:51:52Z)
- Robust Question Answering Through Sub-part Alignment [53.94003466761305]
We model question answering as an alignment problem.
We train our model on SQuAD v1.1 and test it on several adversarial and out-of-domain datasets.
arXiv Detail & Related papers (2020-04-30T09:10:57Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
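For the information-theoretic entry above, the named quantity is conditional mutual information; the symbols below are shorthand of our own (prediction $\hat{y}$, explanation $e$, user knowledge $u$), not necessarily the paper's notation:

$$I(\hat{y}; e \mid u) \;=\; \mathbb{E}\!\left[ \log \frac{P(\hat{y}, e \mid u)}{P(\hat{y} \mid u)\, P(e \mid u)} \right],$$

which is large exactly when, given what the user already knows, the explanation conveys substantial additional information about the prediction.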
- Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.