PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
- URL: http://arxiv.org/abs/2112.15571v1
- Date: Fri, 31 Dec 2021 17:54:57 GMT
- Title: PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
- Authors: Sílvia Casacuberta, Esra Suel, Seth Flaxman
- Abstract summary: We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network.
We show a real-world application of our method to air pollution prediction with street-level images.
- Score: 1.0742675209112622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we introduce a new problem within the growing literature of
interpretability for convolutional neural networks (CNNs). While previous work
has focused on the question of how to visually interpret CNNs, we ask what is
actually worth interpreting, that is, which layers and neurons merit our
attention. Due to the vast size of modern deep learning architectures,
automated, quantitative methods are needed to rank the relative importance of
neurons and thereby answer this question. We present a new statistical method
for ranking the hidden neurons in any convolutional layer of a network. We
define importance as the maximal correlation between the activation maps and
the class score. We provide different ways in which this method can be used for
visualization purposes with MNIST and ImageNet, and show a real-world
application of our method to air pollution prediction with street-level images.
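A minimal sketch of the ranking idea described above, using plain Pearson correlation as a simplified stand-in for the maximal-correlation (ACE) statistic that PCACE actually computes; the function name, the mean-pooling step, and the random demo data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def score_neurons(activations, class_scores):
    """Rank hidden neurons in one convolutional layer by how strongly
    their activation maps correlate with the class score.

    activations  : (n_images, n_neurons, H, W) feature maps
    class_scores : (n_images,) network score for the class of interest
    """
    # Collapse each spatial activation map to one scalar per image
    # (mean-pooling is an illustrative choice, not PCACE's).
    pooled = activations.mean(axis=(2, 3))            # (n_images, n_neurons)

    # Pearson correlation of each neuron with the class score -- a
    # simplified stand-in for the maximal correlation the paper defines.
    pooled_c = pooled - pooled.mean(axis=0)
    scores_c = class_scores - class_scores.mean()
    corr = (pooled_c * scores_c[:, None]).sum(axis=0) / (
        np.linalg.norm(pooled_c, axis=0) * np.linalg.norm(scores_c) + 1e-12
    )
    return np.argsort(-np.abs(corr)), corr            # strongest first

# Demo with random data standing in for real activations:
rng = np.random.default_rng(0)
ranking, corr = score_neurons(rng.normal(size=(128, 64, 7, 7)),
                              rng.normal(size=128))
print(ranking[:5])
```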
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework that unifies CNNs and GNNs via distillation.
The performance of the distilled 'boosted' two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
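The distillation ingredient above can be illustrated with a standard Hinton-style teacher-student loss; this generic sketch is not the CNN2GNN paper's actual objective, and the temperature and mixing weight are assumptions:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic teacher-student distillation: a soft KL term on
    temperature-scaled logits plus a hard cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```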
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Feature CAM: Interpretable AI in Image Classification [2.4409988934338767]
There is a lack of trust in using Artificial Intelligence in critical and high-precision fields such as security, finance, health, and manufacturing.
We introduce Feature CAM, a novel technique in the perturbation-activation family, to create fine-grained, class-discriminative visualizations.
The resulting saliency maps proved to be 3-4 times more human-interpretable than state-of-the-art activation-based methods (ABM).
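One way to picture a perturbation-activation combination, sketched under the assumption of a channel-knockout scheme; this is only meant to convey the general idea, not Feature CAM's actual formulation:

```python
import torch

@torch.no_grad()
def perturbation_activation_map(model_head, feature_maps, class_idx):
    """Weight each channel's activation map by the drop in class score
    when that channel is zeroed out, then combine the weighted maps.

    model_head   : callable mapping feature maps (1, C, H, W) to logits
    feature_maps : (1, C, H, W) activations from the target layer
    """
    base = model_head(feature_maps)[0, class_idx]
    C = feature_maps.shape[1]
    weights = torch.zeros(C, device=feature_maps.device)
    for c in range(C):
        perturbed = feature_maps.clone()
        perturbed[:, c] = 0.0                    # knock out one channel
        weights[c] = base - model_head(perturbed)[0, class_idx]
    # Keep only positive evidence for the class, as CAM variants do.
    cam = torch.relu((weights[None, :, None, None] * feature_maps).sum(dim=1))
    return cam[0]                                # (H, W) saliency map
```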
arXiv Detail & Related papers (2024-03-08T20:16:00Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
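For context, per-layer neural persistence (the measure the paper extends to whole networks) boils down to zero-dimensional persistent homology of a layer's bipartite weight graph, computable with a Kruskal-style union-find. A heavily simplified sketch, not the authors' code:

```python
import numpy as np

def layer_neural_persistence(W):
    """0-dim persistence of one layer's bipartite weight graph,
    filtered by normalized absolute weight in descending order.

    W : weight matrix of shape (n_out, n_in)
    """
    A = np.abs(W) / (np.abs(W).max() + 1e-12)    # normalize to [0, 1]
    n_out, n_in = A.shape
    parent = list(range(n_out + n_in))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    # Add edges strongest-first; each merge kills a component born at 1.
    edges = sorted(((A[i, j], i, n_out + j)
                    for i in range(n_out) for j in range(n_in)), reverse=True)
    total = 0.0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += (1.0 - w) ** 2              # squared lifetime of pair (1, w)
    return np.sqrt(total)                        # 2-norm of the diagram
```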
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
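A generic rendering of that dual objective: gradient ascent on an input image that pushes the target layer's activations up, while an L2 distance term keeps the generated features close to those of real images. The loss weighting, step count, and distance choice are assumptions for illustration:

```python
import torch

def visualize_layer(layer_forward, ref_features, steps=200, lam=0.1, lr=0.05):
    """Optimize an image to maximize a layer's activations while
    staying close to reference features from real images.

    layer_forward : callable mapping (1, 3, H, W) images to the
                    target layer's activations
    ref_features  : precomputed target-layer activations of real images
    """
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        acts = layer_forward(x)
        loss = -acts.mean() + lam * (acts - ref_features).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)                      # keep a valid image
    return x.detach()
```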
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and thereby strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
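The weight-preserving part can be pictured as an EWC-style quadratic anchor on parameters deemed important for old tasks; TWP's actual contribution is how that importance is estimated from both task performance and graph topology, which this sketch assumes is precomputed:

```python
import torch

def twp_style_penalty(model, old_params, importance, beta=1.0):
    """Quadratic penalty anchoring important weights to their values
    after the previous task (importance dict assumed precomputed)."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return beta * penalty
```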
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- What do CNN neurons learn: Visualization & Clustering [0.0]
Convolutional neural networks (CNNs) have shown striking progress in various tasks.
Despite their high performance, the training and prediction process remains a black box.
We address the problem of interpreting a CNN from the aspects of the input image's focus and preference.
arXiv Detail & Related papers (2020-10-18T05:29:22Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for backpropagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
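A toy version of the locality idea: an auxiliary penalty that nudges neighbouring filters in a convolutional layer toward similar weights, so related features cluster together. This is a simplified stand-in for LGNN's actual locality-guided update rule:

```python
import torch

def locality_regularizer(conv_weight, alpha=1e-3):
    """Penalize dissimilarity between filters that are neighbours
    along the output-channel axis.

    conv_weight : (out_channels, in_channels, k, k) kernel tensor
    """
    diffs = conv_weight[1:] - conv_weight[:-1]   # adjacent filter pairs
    return alpha * diffs.pow(2).mean()
```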
arXiv Detail & Related papers (2020-07-12T23:45:51Z)
- An Information-theoretic Visual Analysis Framework for Convolutional Neural Networks [11.15523311079383]
We introduce a data model to organize the data that can be extracted from CNN models.
We then propose two ways to calculate entropy under different circumstances.
We develop a visual analysis system, CNNSlicer, to interactively explore the amount of information changes inside the model.
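A basic quantity such an analysis can track is the Shannon entropy of a layer's activations, estimated from a histogram; the binning below is an assumption, and the paper proposes its own two entropy calculations:

```python
import numpy as np

def activation_entropy(activations, bins=64):
    """Histogram-based Shannon entropy (in bits) of a set of
    activation values from one layer."""
    hist, _ = np.histogram(np.ravel(activations), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                 # drop empty bins
    return -(p * np.log2(p)).sum()
```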
arXiv Detail & Related papers (2020-05-02T21:36:50Z)
- Stochastic encoding of graphs in deep learning allows for complex analysis of gender classification in resting-state and task functional brain networks from the UK Biobank [0.13706331473063876]
We introduce a stochastic encoding method in an ensemble of CNNs to classify functional connectomes by gender.
We measure the salience of three brain networks involved in task- and resting-states, and their interaction.
arXiv Detail & Related papers (2020-02-25T15:10:51Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
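To see why a non-monotonic activation lets a single neuron solve XOR, here is a tiny NumPy check using a Gaussian bump as a stand-in for ADA (the paper's exact activation differs):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.array([1.0, 1.0]), -1.0       # hand-picked linear unit
z = X @ w + b                           # pre-activations: -1, 0, 0, 1
out = np.exp(-z ** 2)                   # bump peaks where z == 0
pred = (out > 0.5).astype(int)
print(pred, (pred == y).all())          # [0 1 1 0] True
```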
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
- Understanding Graph Isomorphism Network for rs-fMRI Functional Connectivity Analysis [49.05541693243502]
We develop a framework for analyzing fMRI data using the Graph Isomorphism Network (GIN).
One of the important contributions of this paper is the observation that the GIN is a dual representation of a convolutional neural network (CNN) in the graph space.
We exploit CNN-based saliency map techniques for the GNN, which we tailor to the proposed GIN with one-hot encoding.
arXiv Detail & Related papers (2020-01-10T23:40:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.