A Peek Into the Reasoning of Neural Networks: Interpreting with
Structural Visual Concepts
- URL: http://arxiv.org/abs/2105.00290v1
- Date: Sat, 1 May 2021 15:47:42 GMT
- Title: A Peek Into the Reasoning of Neural Networks: Interpreting with
Structural Visual Concepts
- Authors: Yunhao Ge, Yao Xiao, Zhi Xu, Meng Zheng, Srikrishna Karanam, Terrence
Chen, Laurent Itti, Ziyan Wu
- Abstract summary: We propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts.
By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs.
- Score: 38.215184251799194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite substantial progress in applying neural networks (NN) to a wide
variety of areas, they still largely suffer from a lack of transparency and
interpretability. While recent developments in explainable artificial
intelligence attempt to bridge this gap (e.g., by visualizing the correlation
between input pixels and final outputs), these approaches are limited to
explaining low-level relationships, and crucially, do not provide insights on
error correction. In this work, we propose a framework (VRX) to interpret
classification NNs with intuitive structural visual concepts. Given a trained
classification model, the proposed VRX extracts relevant class-specific visual
concepts and organizes them using structural concept graphs (SCG) based on
pairwise concept relationships. By means of knowledge distillation, we show VRX
can take a step towards mimicking the reasoning process of NNs and provide
logical, concept-level explanations for final model decisions. With extensive
experiments, we empirically show VRX can meaningfully answer "why" and "why
not" questions about the prediction, providing easy-to-understand insights
about the reasoning process. We also show that these insights can potentially
provide guidance on improving NN's performance.
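
The abstract describes VRX only at a high level, so the following is a minimal, hypothetical Python sketch of what "organizing class-specific concepts into a structural concept graph and using it to answer why / why-not questions" could look like. Every detail here (the concept names, weights, locations, and the distance-based scoring rule) is an assumption made purely for illustration and is not the paper's actual method.

```python
# Minimal, hypothetical sketch of the structural-concept-graph (SCG) idea
# described in the abstract. Concept names, weights, locations, and the
# scoring rule are illustrative assumptions, NOT the VRX implementation.
from dataclasses import dataclass
import math

@dataclass
class ClassSCG:
    label: str
    concepts: dict[str, float]               # concept name -> importance weight
    relations: dict[tuple[str, str], float]  # concept pair -> expected distance

def explain(scg: ClassSCG, detected: dict[str, tuple[float, float]]):
    """Score how well detected concepts (name -> 2D location) fit a class SCG,
    returning the score plus human-readable "why / why not" notes."""
    score, notes = 0.0, []
    for name, weight in scg.concepts.items():
        found = name in detected
        score += weight if found else -weight
        notes.append(f"concept '{name}': {'present' if found else 'missing'} (weight {weight})")
    for (a, b), expected in scg.relations.items():
        if a in detected and b in detected:
            dist = math.dist(detected[a], detected[b])
            score -= abs(dist - expected)  # penalize distorted pairwise structure
            notes.append(f"relation {a}-{b}: distance {dist:.1f}, expected {expected:.1f}")
    return score, notes

# Toy "why fire engine, why not school bus" comparison.
fire_engine = ClassSCG("fire engine",
                       {"ladder": 2.0, "wheel": 1.0, "cab": 1.0},
                       {("ladder", "wheel"): 3.0})
school_bus = ClassSCG("school bus",
                      {"window_row": 2.0, "wheel": 1.0, "cab": 1.0},
                      {("window_row", "wheel"): 2.0})
detected = {"ladder": (0.0, 3.0), "wheel": (0.0, 0.0), "cab": (2.0, 1.0)}

for scg in (fire_engine, school_bus):
    total, notes = explain(scg, detected)
    print(f"{scg.label}: score {total:+.2f}")
    for note in notes:
        print("   ", note)
```

Running the sketch prints a concept-level breakdown for each class (e.g., "window_row: missing"), mirroring the kind of "why / why not" explanation the abstract describes.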
Related papers
- Less is More: Discovering Concise Network Explanations [26.126343100127936]
We introduce Discovering Conceptual Network Explanations (DCNE), a new approach for generating human-comprehensible visual explanations.
Our method automatically finds visual explanations that are critical for discriminating between classes.
DCNE represents a step forward in making neural network decisions accessible and interpretable to humans.
arXiv Detail & Related papers (2024-05-24T06:10:23Z)
- Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts [12.365451175795338]
Graph neural networks (GNNs) have led to breakthroughs in domains such as drug discovery, social network analysis, and travel time estimation.
However, they lack interpretability, which hinders human trust and thereby deployment in settings with high-stakes decisions.
We provide HELP, a novel, inherently interpretable graph pooling approach that reveals how concepts from different GNN layers compose into new ones in later steps.
arXiv Detail & Related papers (2023-11-25T20:06:46Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering [58.64831511644917]
We introduce an interpretable-by-design model that factors model decisions into intermediate human-legible explanations.
We show that our inherently interpretable system can improve by 4.64% over a comparable black-box system on reasoning-focused questions.
arXiv Detail & Related papers (2023-05-24T08:33:15Z)
- SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective [23.175703929763888]
This paper presents a causality-driven framework, SUNY, designed to rationalize the explanations toward better human understanding.
Using the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations through bi-directional quantification of both the necessary and sufficient perspectives.
arXiv Detail & Related papers (2023-03-01T05:54:52Z)
- ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks [0.745554610293091]
We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
arXiv Detail & Related papers (2022-03-02T18:16:57Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- A Chain Graph Interpretation of Real-World Neural Networks [58.78692706974121]
We propose an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure.
The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models.
We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques.
arXiv Detail & Related papers (2020-06-30T14:46:08Z)
- Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" [49.76230210108583]
We propose a framework to isolate and evaluate the reasoning aspect of visual question answering (VQA) separately from its perception.
We also propose a novel top-down calibration technique that allows the model to answer reasoning questions even with imperfect perception.
On the challenging GQA dataset, this framework is used to perform in-depth, disentangled comparisons between well-known VQA models.
arXiv Detail & Related papers (2020-06-20T08:48:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.