Contrastive Explanations in Neural Networks
- URL: http://arxiv.org/abs/2008.00178v1
- Date: Sat, 1 Aug 2020 05:50:01 GMT
- Title: Contrastive Explanations in Neural Networks
- Authors: Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, and Ghassan
AlRegib
- Abstract summary: Current modes of visual explanations answer questions of the form $`Why \text{ } P?'$.
We propose to constrain these $Why$ questions based on some context $Q$ so that our explanations answer contrastive questions of the form $`Why \text{ } P, \text{ } rather \text{ } than \text{ } Q?'$.
- Score: 17.567849430630872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual explanations are logical arguments based on visual features that
justify the predictions made by neural networks. Current modes of visual
explanations answer questions of the form $`Why \text{ } P?'$. These $Why$
questions operate under broad contexts thereby providing answers that are
irrelevant in some cases. We propose to constrain these $Why$ questions based
on some context $Q$ so that our explanations answer contrastive questions of
the form $`Why \text{ } P, \text{ } rather \text{ } than \text{ } Q?'$. In this
paper, we formalize the structure of contrastive visual explanations for neural
networks. We define contrast based on neural networks and propose a methodology
to extract defined contrasts. We then use the extracted contrasts as a plug-in
on top of existing $`Why \text{ } P?'$ techniques, specifically Grad-CAM. We
demonstrate their value in analyzing both networks and data in applications of
large-scale recognition, fine-grained recognition, subsurface seismic analysis,
and image quality assessment.
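The contrastive maps are described as a plug-in on top of Grad-CAM: instead of backpropagating only the score of the predicted class $P$, a signal contrasting $P$ against the context class $Q$ is backpropagated to the final convolutional layer, and the resulting gradients weight the activation maps. Below is a minimal PyTorch-style sketch of this idea; the function name, hook interface, and the logit-difference signal are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def contrastive_gradcam(model, image, target_layer, p_class, q_class):
    """Sketch of a contrastive 'Why P, rather than Q?' map built on Grad-CAM.

    Backpropagates a signal contrasting the logits of P and Q (here their
    difference) instead of the P logit alone, so the pooled gradients weight
    the activation maps by what separates P from Q. Hypothetical interface;
    the paper's exact contrastive signal may differ.
    """
    activations, gradients = [], []

    def fwd_hook(module, inputs, output):
        activations.append(output.detach())

    def bwd_hook(module, grad_inputs, grad_outputs):
        gradients.append(grad_outputs[0].detach())

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        logits = model(image.unsqueeze(0))                   # (1, num_classes)
        contrast = logits[0, p_class] - logits[0, q_class]   # P-vs-Q signal
        contrast.backward()
    finally:
        h_fwd.remove()
        h_bwd.remove()

    acts = activations[0].squeeze(0)      # (C, H, W) feature maps
    grads = gradients[0].squeeze(0)       # (C, H, W) gradients of the contrast
    weights = grads.mean(dim=(1, 2))      # global-average-pooled gradients
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    return cam / (cam.max() + 1e-8)       # normalized (H, W) saliency map
```

For a torchvision VGG-16, for example, target_layer could be model.features[-1], with p_class the predicted label and q_class the contrast class of interest; the resulting map highlights regions that separate P from Q rather than regions that merely support P.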
Related papers
- Explaining the Implicit Neural Canvas: Connecting Pixels to Neurons by Tracing their Contributions [36.41141627989279]
Implicit Neural Representations (INRs) are neural networks trained as a continuous representation of a signal.
Our work is a unified framework for explaining properties of INRs by examining the strength of each neuron's contribution to each output pixel.
arXiv Detail & Related papers (2024-01-18T18:57:40Z) - Why do CNNs excel at feature extraction? A mathematical explanation [53.807657273043446]
We introduce a novel model for image classification, based on feature extraction, that can be used to generate images resembling real-world datasets.
In our proof, we construct piecewise linear functions that detect the presence of features, and show that they can be realized by a convolutional network.
arXiv Detail & Related papers (2023-07-03T10:41:34Z) - Meet You Halfway: Explaining Deep Learning Mysteries [0.0]
We introduce a new conceptual framework, accompanied by a formal description, that aims to shed light on the network's behavior.
In particular, we address why neural networks acquire generalization abilities.
We provide a comprehensive set of experiments that support this new framework, as well as its underlying theory.
arXiv Detail & Related papers (2022-06-09T12:43:10Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - Explanatory Paradigms in Neural Networks [18.32369721322249]
We present a leap-forward expansion to the study of explainability in neural networks by considering explanations as answers to reasoning-based questions.
The answers to these questions are observed correlations, observed counterfactuals, and observed contrastive explanations respectively.
The term observed refers to the specific case of post-hoc explainability, when an explanatory technique explains the decision $P$ after a trained neural network has made the decision $P$.
arXiv Detail & Related papers (2022-02-24T00:22:11Z) - Expressive Explanations of DNNs by Combining Concept Analysis with ILP [0.3867363075280543]
We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN).
We show that our explanation is faithful to the original black-box model.
arXiv Detail & Related papers (2021-05-16T07:00:27Z) - LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular
Supervision for Visual Question Answering [4.602329567377897]
We propose a transparent neural-symbolic reasoning framework for visual question answering.
It solves the problem step by step like humans and provides a human-readable justification at each step.
Our experiments on the GQA dataset show that LRTA outperforms the state-of-the-art model by a large margin.
arXiv Detail & Related papers (2020-11-21T06:39:42Z) - Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be applied in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z) - Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z) - Towards Understanding Hierarchical Learning: Benefits of Neural
Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z) - Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" [49.76230210108583]
We propose a framework to isolate and evaluate the reasoning aspect of visual question answering (VQA) separately from its perception.
We also propose a novel top-down calibration technique that allows the model to answer reasoning questions even with imperfect perception.
On the challenging GQA dataset, this framework is used to perform in-depth, disentangled comparisons between well-known VQA models.
arXiv Detail & Related papers (2020-06-20T08:48:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.