Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation in the medical domain
- URL: http://arxiv.org/abs/2410.18060v1
- Date: Wed, 23 Oct 2024 17:33:27 GMT
- Title: Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation in the medical domain
- Authors: Jaime Sevilla, Nikolay Babakov, Ehud Reiter, Alberto Bugarin
- Abstract summary: We introduce the notion of factor argument independence to address the question of defining when arguments should be presented jointly or separately.
We present an algorithm that produces a list of all independent factor arguments ordered by their strength.
Our proposal has been validated in the medical domain through a human-driven evaluation study.
- Score: 5.999262679775618
- Abstract: In this paper, we propose a model for building natural language explanations for Bayesian Network Reasoning in terms of factor arguments, which are argumentation graphs of flowing evidence, relating the observed evidence to a target variable we want to learn about. We introduce the notion of factor argument independence to address the outstanding question of defining when arguments should be presented jointly or separately, and present an algorithm that, starting from the evidence nodes and a target node, produces a list of all independent factor arguments ordered by their strength. Finally, we implement a scheme to build natural language explanations of Bayesian Reasoning using this approach. Our proposal has been validated in the medical domain through a human-driven evaluation study where we compare the Bayesian Network Reasoning explanations obtained using factor arguments with an alternative explanation method. Evaluation results indicate that users deemed our proposed explanation approach significantly more useful for understanding Bayesian Network Reasoning than the alternative explanation method it was compared against.
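To make the abstract's high-level description more concrete, below is a minimal, hypothetical Python sketch of the general idea: enumerate evidence-to-target paths as candidate factor arguments, merge paths that share intermediate nodes into joint arguments, and report the remaining independent arguments ordered by strength. The toy graph, the strength proxy, and the shared-node independence test are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Illustrative sketch (not the authors' algorithm): treat each evidence->target
# path as a candidate factor argument, score it with a placeholder strength
# proxy, merge arguments that share intermediate nodes (assumed "dependent"),
# and return the independent arguments sorted by strength.
from itertools import combinations

def simple_paths(graph, source, target, path=None):
    """Enumerate simple paths from an evidence node to the target node."""
    path = (path or []) + [source]
    if source == target:
        yield path
        return
    for nxt in graph.get(source, []):
        if nxt not in path:
            yield from simple_paths(graph, nxt, target, path)

def independent_arguments(graph, evidence, target, strength):
    """Return factor arguments as dicts, merging paths that overlap on
    intermediate nodes into a single joint argument (assumption)."""
    args = []
    for e in evidence:
        for p in simple_paths(graph, e, target):
            args.append({"evidence": {e}, "nodes": set(p[1:-1]),
                         "paths": [p], "strength": strength(e, p)})
    merged = True
    while merged:                      # greedily merge dependent arguments
        merged = False
        for a, b in combinations(args, 2):
            if a["nodes"] & b["nodes"]:   # shared intermediate node => dependent
                a["evidence"] |= b["evidence"]
                a["nodes"] |= b["nodes"]
                a["paths"] += b["paths"]
                a["strength"] += b["strength"]
                args.remove(b)
                merged = True
                break
    return sorted(args, key=lambda a: a["strength"], reverse=True)

# Toy usage: two evidence nodes arguing about a hypothetical diagnosis node "D".
graph = {"Smoking": ["Bronchitis"], "Bronchitis": ["D"],
         "Fever": ["Infection"], "Infection": ["D"]}
toy_strength = lambda e, path: 1.0 / len(path)   # placeholder strength proxy
for arg in independent_arguments(graph, ["Smoking", "Fever"], "D", toy_strength):
    print(arg["evidence"], "->", arg["paths"], "strength:", round(arg["strength"], 2))
```

In the paper, argument strength would come from the network's probabilities rather than a path-length proxy, and each independent argument would then be rendered as a natural language explanation; this sketch only illustrates the extract-merge-rank structure.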
Related papers
- When factorization meets argumentation: towards argumentative explanations [0.0]
We propose a novel model that combines factorization-based methods with argumentation frameworks (AFs).
Our framework seamlessly incorporates side information, such as user contexts, leading to more accurate predictions.
arXiv Detail & Related papers (2024-05-13T19:16:28Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement [15.947501347927687]
We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models.
The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds.
We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
arXiv Detail & Related papers (2022-05-23T19:39:51Z)
- Argumentative Explanations for Pattern-Based Text Classifiers [15.81939090849456]
We focus on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification.
We propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations.
arXiv Detail & Related papers (2022-05-22T21:16:49Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Finding, Scoring and Explaining Arguments in Bayesian Networks [0.0]
We define a notion of independent arguments, and propose an algorithm to extract a list of relevant, independent arguments.
To demonstrate the relevance of the arguments, we show how we can use the extracted arguments to approximate message passing.
arXiv Detail & Related papers (2021-11-30T12:41:04Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- Local Explanation of Dialogue Response Generation [77.68077106724522]
Local explanation of response generation (LERG) is proposed to gain insights into the reasoning process of a generation model.
LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response.
Our results show that our method consistently improves over other widely used methods on the proposed automatic and human evaluation metrics for this new task by 4.4-12.8%.
arXiv Detail & Related papers (2021-06-11T17:58:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.