An Incremental Explanation of Inference in Hybrid Bayesian Networks for
Increasing Model Trustworthiness and Supporting Clinical Decision Making
- URL: http://arxiv.org/abs/2003.02599v2
- Date: Fri, 6 Mar 2020 10:33:23 GMT
- Authors: Evangelia Kyrimi, Somayyeh Mossadegh, Nigel Tai, William Marsh
- Abstract summary: Clinicians are more likely to use a model if they can understand and trust its predictions.
A Bayesian network (BN) model has the advantage that it is not a black-box and its reasoning can be explained.
We propose an incremental explanation of inference that can be applied to hybrid BNs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Various AI models are increasingly being considered as part of clinical
decision-support tools. However, the trustworthiness of such models is rarely
considered. Clinicians are more likely to use a model if they can understand
and trust its predictions. Key to this is if its underlying reasoning can be
explained. A Bayesian network (BN) model has the advantage that it is not a
black-box and its reasoning can be explained. In this paper, we propose an
incremental explanation of inference that can be applied to hybrid BNs, i.e.
those that contain both discrete and continuous nodes. The key questions that
we answer are: (1) which important evidence supports or contradicts the
prediction, and (2) through which intermediate variables does the information
flow. The explanation is illustrated using a real clinical case study. A small
evaluation study is also conducted.
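One common way to answer the first question — which evidence supports or contradicts the prediction — is to measure how the posterior of the target variable changes when each evidence item is withheld. The sketch below illustrates that idea on a toy discrete network; the network structure, variable names, and probabilities are invented for illustration and are not taken from the paper, and the paper's actual method targets hybrid (discrete and continuous) BNs.

```python
# Toy illustration (not the paper's exact method): rank each evidence item by
# its impact on a prediction, i.e. the change in the posterior when it is
# withheld. Network (all variables binary, invented for illustration):
# Injury -> Shock <- Bleeding, Shock -> Death.
from itertools import product

priors = {"Injury": 0.3, "Bleeding": 0.2}          # P(var = True)
cpt_shock = {                                      # P(Shock=True | Injury, Bleeding)
    (True, True): 0.9, (True, False): 0.6,
    (False, True): 0.5, (False, False): 0.05,
}
cpt_death = {True: 0.4, False: 0.02}               # P(Death=True | Shock)

def joint(injury, bleeding, shock, death):
    """Joint probability of one full assignment, via the chain rule."""
    p = priors["Injury"] if injury else 1 - priors["Injury"]
    p *= priors["Bleeding"] if bleeding else 1 - priors["Bleeding"]
    ps = cpt_shock[(injury, bleeding)]
    p *= ps if shock else 1 - ps
    pd = cpt_death[shock]
    p *= pd if death else 1 - pd
    return p

def posterior_death(evidence):
    """P(Death=True | evidence) by brute-force enumeration of the joint."""
    names = ["Injury", "Bleeding", "Shock", "Death"]
    num = den = 0.0
    for vals in product([True, False], repeat=4):
        assign = dict(zip(names, vals))
        if any(assign[k] != v for k, v in evidence.items()):
            continue  # assignment inconsistent with the observed evidence
        p = joint(*vals)
        den += p
        if assign["Death"]:
            num += p
    return num / den

evidence = {"Injury": True, "Bleeding": True}
baseline = posterior_death(evidence)
# Impact of each item: positive means it pushed the prediction upwards.
for var in evidence:
    reduced = {k: v for k, v in evidence.items() if k != var}
    print(f"{var}: impact = {baseline - posterior_death(reduced):+.3f}")
```

Enumeration is exponential in the number of variables, so real systems would use an exact or approximate inference engine instead; the leave-one-out impact measure itself is unchanged.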
Related papers
- Counterfactual explainability of black-box prediction models [4.14360329494344]
We propose a new notion called counterfactual explainability for black-box prediction models.
Counterfactual explainability has three key advantages.
arXiv Detail & Related papers (2024-11-03T16:29:09Z)
- Bayesian Kolmogorov Arnold Networks (Bayesian_KANs): A Probabilistic Approach to Enhance Accuracy and Interpretability [1.90365714903665]
This study presents a novel framework called Bayesian Kolmogorov Arnold Networks (BKANs)
BKANs combines the expressive capacity of Kolmogorov Arnold Networks with Bayesian inference.
Our method provides useful insights into prediction confidence and decision boundaries and outperforms traditional deep learning models in terms of prediction accuracy.
arXiv Detail & Related papers (2024-08-05T10:38:34Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy [5.287156503763459]
We propose a unified explanation approach that combines both model-dependent and agnostic explanations to produce an explanation set.
The generated explanations are consistent in the neighborhood of a sample and can highlight causal relationships between image content and the outcome.
arXiv Detail & Related papers (2022-12-05T12:46:19Z)
- UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA [47.8796570442486]
Question Answering systems are increasingly deployed in applications where they support real-world decisions.
Inherently interpretable models or post hoc explainability methods can help users to comprehend how a model arrives at its prediction.
We introduce SQuARE v2, the new version of SQuARE, to provide an explainability infrastructure for comparing models.
arXiv Detail & Related papers (2022-08-19T13:01:01Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- A Taxonomy of Explainable Bayesian Networks [0.0]
We introduce a taxonomy of explainability in Bayesian networks.
We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions.
arXiv Detail & Related papers (2021-01-28T07:29:57Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.