Explanatory Paradigms in Neural Networks
- URL: http://arxiv.org/abs/2202.11838v1
- Date: Thu, 24 Feb 2022 00:22:11 GMT
- Title: Explanatory Paradigms in Neural Networks
- Authors: Ghassan AlRegib, Mohit Prabhushankar
- Abstract summary: We present a leap-forward expansion to the study of explainability in neural networks by considering explanations as answers to reasoning-based questions.
The answers to these questions are observed correlations, observed counterfactuals, and observed contrastive explanations respectively.
The term observed refers to the specific case of post-hoc explainability, when an explanatory technique explains the decision $P$ after a trained neural network has made the decision $P$.
- Score: 18.32369721322249
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we present a leap-forward expansion to the study of
explainability in neural networks by considering explanations as answers to
abstract reasoning-based questions. With $P$ as the prediction from a neural
network, these questions are `Why P?', `What if not P?', and `Why P, rather
than Q?' for a given contrast prediction $Q$. The answers to these questions
are observed correlations, observed counterfactuals, and observed contrastive
explanations respectively. Together, these explanations constitute the
abductive reasoning scheme. We term the three explanatory schemes as observed
explanatory paradigms. The term observed refers to the specific case of
post-hoc explainability, when an explanatory technique explains the decision
$P$ after a trained neural network has made the decision $P$. The primary
advantage of viewing explanations through the lens of abductive reasoning-based
questions is that explanations can be used as reasons while making decisions.
The post-hoc field of explainability, which previously only justified decisions,
becomes active by being involved in the decision-making process and by providing
limited but relevant and contextual interventions. The contributions of this
article are: ($i$) realizing explanations as reasoning paradigms, ($ii$)
providing a probabilistic definition of observed explanations and their
completeness, ($iii$) creating a taxonomy for evaluation of explanations, and
($iv$) positioning gradient-based complete explainability's replicability and
reproducibility across multiple applications and data modalities, and ($v$) code
repositories, publicly available at
https://github.com/olivesgatech/Explanatory-Paradigms.
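As a rough, post-hoc illustration of how the three questions can be instantiated, the sketch below computes Grad-CAM-style saliency maps in PyTorch, changing only the score that is backpropagated for each question. The hook placement, the negated-logit choice for the counterfactual case, and the logit-difference choice for the contrastive case are illustrative assumptions, not the formulation used in the paper or in the linked repository.

```python
# Illustrative sketch only (not the paper's exact method or its released code):
# the three observed explanatory questions, instantiated as Grad-CAM-style
# saliency maps that differ only in the backpropagated score. `model`,
# `conv_layer`, `image`, and the class indices P and Q are caller-supplied.
import torch
import torch.nn.functional as F


def gradcam_map(activations, gradients):
    """Weight each channel by its spatially averaged gradient, then ReLU-pool."""
    weights = gradients.mean(dim=(2, 3), keepdim=True)              # (1, C, 1, 1)
    cam = F.relu((weights * activations).sum(dim=1, keepdim=True))  # (1, 1, H, W)
    return cam / (cam.max() + 1e-8)


def explain(model, conv_layer, image, P, Q=None, question="why"):
    acts, grads = {}, {}
    fwd = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    bwd = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(image)                    # (1, num_classes)
        if question == "why":                    # 'Why P?' -> observed correlations
            score = logits[0, P]
        elif question == "what_if_not":          # 'What if not P?' -> observed counterfactual
            score = -logits[0, P]                # evidence that would lower P (one common choice)
        else:                                    # 'Why P, rather than Q?' -> observed contrast
            score = logits[0, P] - logits[0, Q]  # simple contrastive score; the paper's loss may differ
        model.zero_grad()
        score.backward()
        return gradcam_map(acts["a"], grads["g"])
    finally:
        fwd.remove()
        bwd.remove()
```

For example, with a torchvision ResNet one could pass its last convolutional block as the hooked layer, e.g. `explain(model, model.layer4, image, P=pred, Q=contrast, question="contrast")`, to obtain the contrastive map.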
Related papers
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple KG and CoT datasets reveal the effect of training on random walk paths.
arXiv Detail & Related papers (2024-02-05T18:25:51Z) - How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors? [12.993027779814478]
We ask the question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors?
Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model.
Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when a decision-making process involves feature interactions.
arXiv Detail & Related papers (2023-10-27T21:16:28Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Do Explanations Explain? Model Knows Best [39.86131552976105]
It is a mystery which input features contribute to a neural network's output.
We propose a framework for evaluating the explanations using the neural network model itself.
arXiv Detail & Related papers (2022-03-04T12:39:29Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - A Taxonomy of Explainable Bayesian Networks [0.0]
We introduce a taxonomy of explainability in Bayesian networks.
We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions.
arXiv Detail & Related papers (2021-01-28T07:29:57Z) - ExplanationLP: Abductive Reasoning for Explainable Science Question Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal
Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Contrastive Explanations in Neural Networks [17.567849430630872]
Current modes of visual explanations answer questions of the form `Why P?'.
We propose to constrain these `Why P?' questions based on some context $Q$ so that our explanations answer contrastive questions of the form `Why P, rather than Q?'.
arXiv Detail & Related papers (2020-08-01T05:50:01Z)