Finding Representative Interpretations on Convolutional Neural Networks
- URL: http://arxiv.org/abs/2108.06384v2
- Date: Tue, 17 Aug 2021 02:41:45 GMT
- Title: Finding Representative Interpretations on Convolutional Neural Networks
- Authors: Peter Cho-Ho Lam, Lingyang Chu, Maxim Torgonskiy, Jian Pei, Yong
Zhang, Lanjun Wang
- Abstract summary: We develop a novel unsupervised approach to produce a highly representative interpretation for a large number of similar images.
We formulate the problem of finding representative interpretations as a co-clustering problem, and convert it into a submodular cost submodular cover problem.
Our experiments demonstrate the excellent performance of our method.
- Score: 43.25913447473829
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpreting the decision logic behind effective deep convolutional neural
networks (CNNs) on images complements the success of deep learning models.
However, existing methods can only interpret specific decision logic on
individual images or a small number of images. To facilitate human
understandability and generalization, it is important to develop
representative interpretations that capture the common decision logic of a CNN
on a large group of similar images, revealing the common semantics that the
data contributes to many closely related predictions. In this paper, we develop a
novel unsupervised approach to produce a highly representative interpretation
for a large number of similar images. We formulate the problem of finding
representative interpretations as a co-clustering problem, and convert it into
a submodular cost submodular cover problem based on a sample of the linear
decision boundaries of a CNN. We also present a visualization and similarity
ranking method. Our extensive experiments demonstrate the excellent performance
of our method.
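The abstract's key algorithmic step is converting the co-clustering formulation into a submodular cost submodular cover (SCSC) problem over sampled linear decision boundaries. A standard way to attack SCSC instances is a greedy heuristic that repeatedly picks the candidate with the best coverage-gain-per-cost ratio. The sketch below illustrates that generic greedy pattern on a toy weighted set-cover instance; the universe, subsets, and cost model are illustrative stand-ins, not the paper's actual construction from CNN decision boundaries.

```python
def greedy_scsc(candidates, cover, cost, target):
    """Greedily select candidates until cover(selected) >= target.

    cover: monotone submodular coverage function over candidate sets
    cost:  per-candidate cost (a simple modular cost as a stand-in)
    """
    selected = set()
    covered = cover(selected)
    while covered < target:
        best, best_ratio = None, 0.0
        for c in candidates - selected:
            gain = cover(selected | {c}) - covered
            if gain > 0:
                ratio = gain / cost(c)  # marginal coverage per unit cost
                if ratio > best_ratio:
                    best, best_ratio = c, ratio
        if best is None:  # coverage target is unreachable
            break
        selected.add(best)
        covered = cover(selected)
    return selected


# Toy instance: cover the elements 0..9 using weighted subsets.
subsets = {
    "a": {0, 1, 2, 3},
    "b": {3, 4, 5},
    "c": {5, 6, 7, 8, 9},
    "d": {0, 9},
}
costs = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.0}

picked = greedy_scsc(
    candidates=set(subsets),
    cover=lambda S: len(set().union(*(subsets[s] for s in S)) if S else set()),
    cost=lambda c: costs[c],
    target=10,
)
# All four subsets are needed to cover every element in this instance.
```

In the paper's setting the candidates would correspond to sampled linear decision boundaries, and the coverage function to the number of similar images contained in the induced region; the greedy ratio rule above is the generic workhorse for such covers.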
Related papers
- DP-Net: Learning Discriminative Parts for image recognition [4.480595534587716]
DP-Net is a deep architecture with strong interpretation capabilities.
It exploits a pretrained Convolutional Neural Network (CNN) combined with a part-based recognition module.
arXiv Detail & Related papers (2024-04-23T13:42:12Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Adversarial Attacks on the Interpretation of Neuron Activation
Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z)
- Exploring layerwise decision making in DNNs [1.766593834306011]
We show that by encoding the discrete sample activation values of nodes as a binary representation, we are able to extract a decision tree.
We then combine these decision trees with existing feature attribution techniques in order to produce an interpretation of each layer of a model.
arXiv Detail & Related papers (2022-02-01T11:38:59Z)
- Visualizing the Diversity of Representations Learned by Bayesian Neural Networks [5.660714085843854]
We investigate how XAI methods can be used for exploring and visualizing the diversity of feature representations learned by Bayesian Neural Networks (BNNs).
Our work provides new insights into the posterior distribution in terms of human-understandable feature information with regard to the underlying decision-making strategies.
arXiv Detail & Related papers (2022-01-26T10:40:55Z)
- Multi-Semantic Image Recognition Model and Evaluating Index for explaining the deep learning models [31.387124252490377]
We first propose a multi-semantic image recognition model, which enables human beings to understand the decision-making process of the neural network.
We then present a new evaluation index, which can quantitatively assess the model's interpretability.
This paper also exhibits the relevant baseline performance with current state-of-the-art deep learning models.
arXiv Detail & Related papers (2021-09-28T07:18:05Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.