Motivating explanations in Bayesian networks using MAP-independence
- URL: http://arxiv.org/abs/2208.03121v1
- Date: Fri, 5 Aug 2022 12:26:54 GMT
- Title: Motivating explanations in Bayesian networks using MAP-independence
- Authors: Johan Kwisthout
- Abstract summary: In Bayesian networks a diagnosis or classification is typically formalized as the computation of the most probable joint value assignment to the hypothesis variables.
In this paper we introduce a new concept, MAP-independence, which tries to capture this notion of relevance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In decision support systems the motivation and justification of the system's
diagnosis or classification is crucial for the acceptance of the system by the
human user. In Bayesian networks a diagnosis or classification is typically
formalized as the computation of the most probable joint value assignment to
the hypothesis variables, given the observed values of the evidence variables
(generally known as the MAP problem). While solving the MAP problem gives the
most probable explanation of the evidence, the computation is a black box as
far as the human user is concerned and it does not give additional insights
that allow the user to appreciate and accept the decision. For example, a user
might want to know whether an unobserved variable could potentially (upon
observation) impact the explanation, or whether it is irrelevant in this
respect. In this paper we introduce a new concept, MAP-independence, which
tries to capture this notion of relevance, and explore its role towards a
potential justification of an inference to the best explanation. We formalize
several computational problems based on this concept and assess their
computational complexity.
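The MAP computation and the MAP-independence check described in the abstract can be illustrated with a brute-force sketch on a toy network. All variable names, structure, and probabilities below are illustrative assumptions, not taken from the paper; the paper's own contribution is the complexity analysis, not these algorithms.

```python
# Toy Bayesian network over binary variables H (hypothesis), R (unobserved
# intermediate), E (evidence), factored as P(H) * P(R | H) * P(E | R).
# All numbers are made up for illustration.
p_h = {0: 0.6, 1: 0.4}
p_r_given_h = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (r, h)
p_e_given_r = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}  # key: (e, r)

def joint(h, r, e):
    """Joint probability P(H=h, R=r, E=e) under the factorisation above."""
    return p_h[h] * p_r_given_h[(r, h)] * p_e_given_r[(e, r)]

def map_assignment(e, r_fixed=None):
    """Brute-force MAP over H given evidence E=e, optionally also fixing R."""
    best_h, best_p = None, -1.0
    for h in (0, 1):
        rs = (r_fixed,) if r_fixed is not None else (0, 1)
        p = sum(joint(h, r, e) for r in rs)  # marginalise over unfixed R
        if p > best_p:
            best_h, best_p = h, p
    return best_h

# H is MAP-independent of R given E=e if observing R, whatever its value,
# would leave the MAP assignment to H unchanged.
e = 1
baseline = map_assignment(e)
map_independent = all(map_assignment(e, r_fixed=r) == baseline for r in (0, 1))
print(f"MAP(H | E={e}) = {baseline}, MAP-independent of R: {map_independent}")
```

In this toy network, observing R=0 would flip the MAP assignment to H, so H is not MAP-independent of R: observing R is relevant to the explanation, which is exactly the kind of insight the paper proposes offering to the user. Note that this enumeration is exponential in the number of variables; the point of the paper's complexity results is that such checks are computationally hard in general.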
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims that is not observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explaining $\mathcal{ELH}$ Concept Descriptions through Counterfactual Reasoning [3.5323691899538128]
An intrinsically transparent way to do classification is by using concepts in description logics.
One solution is to employ counterfactuals to answer the question, "How must feature values be changed to obtain a different classification?"
arXiv Detail & Related papers (2023-01-12T16:06:06Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for this task.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks [0.745554610293091]
We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
arXiv Detail & Related papers (2022-03-02T18:16:57Z)
- A Bayesian Framework for Information-Theoretic Probing [51.98576673620385]
We argue that probing should be seen as approximating a mutual information.
This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences.
This paper proposes a new framework to measure what we term Bayesian mutual information.
arXiv Detail & Related papers (2021-09-08T18:08:36Z)
- Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals [7.727206277914709]
We propose a principled causality-based approach for explaining black-box decision-making systems.
We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm.
We show how such counterfactuals can provide actionable recourse for individuals negatively affected by the algorithm's decision.
arXiv Detail & Related papers (2021-03-22T16:20:21Z)
- A Taxonomy of Explainable Bayesian Networks [0.0]
We introduce a taxonomy of explainability in Bayesian networks.
We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions.
arXiv Detail & Related papers (2021-01-28T07:29:57Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.