SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective
- URL: http://arxiv.org/abs/2303.00244v3
- Date: Mon, 27 May 2024 07:11:49 GMT
- Title: SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective
- Authors: Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Zhaodan Kong, Kwan-Liu Ma
- Abstract summary: This paper presents a causality-driven framework, SUNY, designed to rationalize the explanations toward better human understanding.
Using the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations by bi-directional quantifications on both the necessary and sufficient perspectives.
- Score: 23.175703929763888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers have proposed various methods for visually interpreting Convolutional Neural Networks (CNNs) via saliency maps, with Class-Activation-Map (CAM) based approaches as a leading family. However, in terms of internal design logic, existing CAM-based approaches often overlook the causal perspective that answers the core "why" question to help humans understand the explanation. Additionally, current CNN explanations lack the consideration of both necessity and sufficiency, two complementary sides of a desirable explanation. This paper presents a causality-driven framework, SUNY, designed to rationalize the explanations toward better human understanding. Using the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations by bi-directional quantifications on both the necessary and sufficient perspectives. Extensive evaluations show that SUNY not only produces more informative and convincing explanations from the angles of necessity and sufficiency, but also achieves performance competitive with other approaches across different CNN architectures on large-scale datasets, including ILSVRC2012 and CUB-200-2011.
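To make the abstract's bi-directional quantification concrete, the sketch below gives a minimal perturbation-style illustration. It is not SUNY's actual formulation (the paper derives its scores from causal definitions and also treats internal filters as causes); it assumes a PyTorch classifier and a candidate saliency mask, where sufficiency asks whether the highlighted evidence alone preserves the prediction and necessity asks whether removing it destroys the prediction.

```python
# Minimal sketch of bi-directional necessity/sufficiency scoring for a
# saliency mask. Illustrative only -- not the SUNY implementation; the
# zero-masking baseline and the ratio-based scores are assumptions.
import torch

def necessity_sufficiency(model, image, mask, target_class):
    """image: (1, C, H, W); mask: (1, 1, H, W) with values in [0, 1]."""
    model.eval()
    with torch.no_grad():
        p_full = torch.softmax(model(image), dim=1)[0, target_class]
        # Sufficiency: keep only the highlighted region.
        p_keep = torch.softmax(model(image * mask), dim=1)[0, target_class]
        # Necessity: delete the highlighted region.
        p_drop = torch.softmax(model(image * (1 - mask)), dim=1)[0, target_class]
    sufficiency = (p_keep / p_full).item()    # near 1: the region suffices
    necessity = (1 - p_drop / p_full).item()  # near 1: the region is necessary
    return necessity, sufficiency
```

Perturbation scores of this kind are sensitive to the masking baseline (zeros vs. blur vs. noise), which is one reason a principled causal definition of the two quantities, as SUNY pursues, matters.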
Related papers
- Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering [58.64831511644917]
We introduce an interpretable-by-design model that factors model decisions into intermediate, human-legible explanations.
We show that our inherently interpretable system improves by 4.64% over a comparable black-box system on reasoning-focused questions.
arXiv Detail & Related papers (2023-05-24T08:33:15Z)
- FlowX: Towards Explainable Graph Neural Networks via Message Flows [59.025023020402365]
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.
We propose a novel method, FlowX, that explains GNNs by identifying important message flows.
We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets.
arXiv Detail & Related papers (2022-06-26T22:48:15Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Learning and Evaluating Graph Neural Network Explanations based on Counterfactual and Factual Reasoning [46.20269166675735]
Graph Neural Networks (GNNs) have shown great advantages in learning representations for structured data.
In this paper, we take insights from Counterfactual and Factual (CF2) reasoning in causal inference theory to solve both the learning and evaluation problems.
For quantitatively evaluating the generated explanations without requiring ground truth, we design metrics based on Counterfactual and Factual reasoning.
arXiv Detail & Related papers (2022-02-17T18:30:45Z)
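The factual/counterfactual pairing in the entry above mirrors sufficiency and necessity, and is easy to sketch for an explanation given as an edge mask. The code below is illustrative only, not the CF2 authors' implementation: `gnn` is a hypothetical graph classifier over dense node features and an adjacency matrix.

```python
# Illustrative factual vs. counterfactual evaluation of a GNN explanation.
# Assumptions: gnn(x, adj) returns class logits for the whole graph, and the
# explanation is a binary edge mask over a dense adjacency matrix.
import torch

def factual_counterfactual(gnn, x, adj, edge_mask, target_class):
    """x: (N, F) node features; adj, edge_mask: (N, N)."""
    with torch.no_grad():
        p_full = torch.softmax(gnn(x, adj), dim=-1)[target_class]
        # Factual: does the explanation subgraph alone keep the prediction?
        p_factual = torch.softmax(gnn(x, adj * edge_mask), dim=-1)[target_class]
        # Counterfactual: does removing the subgraph change the prediction?
        p_counter = torch.softmax(gnn(x, adj * (1 - edge_mask)), dim=-1)[target_class]
    return p_factual.item(), (p_full - p_counter).item()
```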
- A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts [38.215184251799194]
We propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts.
By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs.
arXiv Detail & Related papers (2021-05-01T15:47:42Z)
- Correcting Classification: A Bayesian Framework Using Explanation Feedback to Improve Classification Abilities [2.0931163605360115]
Explanations are social, meaning they are a transfer of knowledge through interactions.
We overcome these difficulties by training a Bayesian convolutional neural network (CNN) that uses explanation feedback.
Our proposed method utilizes this feedback for fine-tuning to correct the model such that the explanations and classifications improve.
arXiv Detail & Related papers (2021-04-29T13:59:21Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features by creating, for a given set of images, corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
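The dual-objective idea in the entry above can be sketched as plain gradient optimization of the input image, with a forward hook reading the target layer; the specific loss forms, their weighting, and the hyperparameters below are assumptions, not the paper's exact objectives.

```python
# Rough sketch of generator-free feature visualization: push a layer's
# activation up while a distance loss anchors the image to the input.
# Loss forms and hyperparameters are assumptions, not the paper's.
import torch

def visualize_layer(model, layer, ref_image, steps=200, lr=0.05, alpha=0.1):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)             # optimize the image, not the model
    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    img = ref_image.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(img)                          # fills acts["out"] via the hook
        activation_loss = -acts["out"].norm()            # maximize response
        distance_loss = (img - ref_image).pow(2).mean()  # stay near the input
        (activation_loss + alpha * distance_loss).backward()
        opt.step()
    hook.remove()
    return img.detach()
```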
- Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification [0.0]
Deep neural networks (DNNs) have demonstrated impressive performance on a wide range of problems in domains such as medicine, finance, and law.
Despite this, they have long been considered black-box systems that provide good results without being able to explain them.
This article proposes a method to simplify the prediction explanation of One-Dimensional (1D) Convolutional Neural Networks (CNNs) by identifying sufficient and necessary feature sets.
arXiv Detail & Related papers (2020-10-08T02:01:21Z)
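In the same spirit as the entry above (though not the paper's exact algorithm), a sufficient feature set for a text classifier can be grown greedily: rank tokens by how much masking each one alone hurts the prediction, then add them back until the prediction is recovered. `predict_proba`, the padding token, and the recovery threshold are all assumptions.

```python
# Hedged sketch: greedily grow a word set until it alone is sufficient to
# recover the classifier's original prediction. Not the paper's algorithm.
import numpy as np

def greedy_sufficient_set(predict_proba, tokens, pad="<pad>", threshold=0.9):
    """predict_proba: maps a token list to a class-probability vector."""
    full = predict_proba(tokens)
    target = int(np.argmax(full))
    # Score each token by the probability drop when it alone is masked.
    drops = [full[target]
             - predict_proba(tokens[:i] + [pad] + tokens[i + 1:])[target]
             for i in range(len(tokens))]
    kept = [pad] * len(tokens)
    for i in np.argsort(drops)[::-1]:        # most impactful tokens first
        kept[i] = tokens[i]
        if predict_proba(kept)[target] >= threshold * full[target]:
            return [t for t in kept if t != pad]   # sufficient subset found
    return tokens                            # fall back to the full input
```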
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
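To see concretely why the two explainer families in the entry above target different ground truths, contrast the greedy subset search sketched earlier with a standard Monte Carlo Shapley estimator: Shapley averages a feature's marginal contribution over random coalitions rather than seeking any single sufficient set. `value_fn`, a model score over feature masks, is a hypothetical stand-in.

```python
# Standard Monte Carlo estimate of one feature's Shapley value, included to
# contrast with minimal-sufficient-subset search. value_fn is hypothetical:
# it maps a boolean keep-mask over features to a model score.
import numpy as np

def shapley_value(value_fn, n_features, feature, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        mask = np.zeros(n_features, dtype=bool)
        for j in perm:                      # build the coalition that
            if j == feature:                # precedes the feature in this
                break                       # random ordering
            mask[j] = True
        without = value_fn(mask.copy())
        mask[feature] = True
        total += value_fn(mask) - without   # marginal contribution
    return total / n_samples
```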
- Obtaining Faithful Interpretations from Compositional Neural Networks [72.41100663462191]
We evaluate the intermediate outputs of Neural Module Networks (NMNs) on the NLVR2 and DROP datasets.
We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour.
arXiv Detail & Related papers (2020-05-02T06:50:35Z)