Identifying Class Specific Filters with L1 Norm Frequency Histograms in
Deep CNNs
- URL: http://arxiv.org/abs/2112.07719v1
- Date: Tue, 14 Dec 2021 19:40:55 GMT
- Title: Identifying Class Specific Filters with L1 Norm Frequency Histograms in
Deep CNNs
- Authors: Akshay Badola, Cherian Roy, Vineet Padmanabhan, Rajendra Lal
- Abstract summary: We analyze the final and penultimate layers of Deep Convolutional Networks.
We identify subsets of features that contribute most towards the network's decision for a class.
- Score: 1.1278903078792917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability of Deep Neural Networks has become a major area of
exploration. Although these networks have achieved state-of-the-art accuracy in
many tasks, it is extremely difficult to interpret and explain their decisions.
In this work we analyze the final and penultimate layers of Deep Convolutional
Networks and provide an efficient method for identifying subsets of features
that contribute most towards the network's decision for a class. We demonstrate
that the number of such features per class is much lower than the dimension of
the final layer, and therefore the decision surface of Deep CNNs lies on a
low-dimensional manifold whose dimension is proportional to the network depth.
Our methods allow us to decompose the final layer into separate subspaces,
which is far more interpretable and has a lower computational cost than using
the final layer of the full network.
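As a rough illustration of the title's idea, the sketch below (PyTorch,
assuming a torchvision ResNet-18 and a stream of images from a single class)
builds a frequency histogram of how often each penultimate-block filter ranks
among the top-k by L1 norm; the top-k rule and the selection threshold are
illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: rank penultimate-block filters by how often they have a
# high L1 norm on images of one class (illustrative, not the paper's rule).
import torch
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()

acts = {}
def hook(_module, _inputs, output):
    acts["feat"] = output.detach()

model.layer4.register_forward_hook(hook)  # penultimate block: (N, 512, H, W)

@torch.no_grad()
def filter_hit_histogram(class_images, top_k=32):
    """Count, per filter, how often it is among the top-k L1-norm filters."""
    counts = torch.zeros(512)  # resnet18's layer4 has 512 filters
    for x in class_images:     # each x: (1, 3, 224, 224)
        model(x)
        l1 = acts["feat"].abs().sum(dim=(0, 2, 3))  # per-filter L1 norm
        counts[l1.topk(top_k).indices] += 1
    return counts

# Filters whose hit frequency exceeds, say, half the class images could then
# be taken as the class-specific subset (the threshold is an assumption).
```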
Related papers
- Neural Collapse in the Intermediate Hidden Layers of Classification
Neural Networks [0.0]
Neural Collapse (NC) gives a precise description of the representations of classes in the final hidden layer of classification neural networks.
In the present paper, we provide the first comprehensive empirical analysis of the emergence of NC in the intermediate hidden layers.
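The summary does not state which collapse metrics are used; as a hedged
sketch, the within-class to between-class scatter ratio below (often called
NC1) is one standard measurement that can be evaluated at any hidden layer.

```python
# Illustrative NC1 measurement: ratio of within-class to between-class
# scatter of a layer's features; it shrinks toward 0 as collapse emerges.
import torch

def nc1_ratio(features, labels):
    """features: (N, D) activations from a chosen layer; labels: (N,)."""
    global_mean = features.mean(dim=0)
    within, between = 0.0, 0.0
    for c in labels.unique():
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return (within / between).item()

# Evaluating this ratio layer by layer traces how collapse develops with depth.
```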
arXiv Detail & Related papers (2023-08-05T01:19:38Z)
SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
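A loose sketch of the "overcomplete" idea: upsampling inside the encoder
slows receptive-field growth, biasing the network toward low-level features.
The block below is an illustrative PyTorch fragment, not the paper's network.

```python
# Overcomplete encoder fragment: upsample (rather than downsample) after each
# convolution so the receptive field stays small and local detail dominates.
import torch.nn as nn

overcomplete_encoder = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),  # up instead of down: restrains the RF
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2),
)
```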
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
A layer-stress learning framework universally augments deep neural network tasks [6.2067442999727644]
We present a layer-stress deep learning framework (x-NN) that automatically decides whether shallow or deep feature maps should be used in a deep network.
x-NN showed outstanding prediction ability in the Alzheimer's Disease Classification Technique Challenge at PRCV 2021, where it took first place and outperformed all other AI models.
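The abstract gives few architectural details; purely as a hypothetical
illustration of a shallow-versus-deep decision, a learned scalar gate could
mix two feature maps of equal shape (the design below is an assumption, not
x-NN's mechanism).

```python
# Hypothetical depth gate: a per-sample scalar weight mixes a shallow and a
# deep feature map of identical shape.
import torch.nn as nn

class DepthGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid(),
        )

    def forward(self, shallow, deep):
        g = self.gate(deep).view(-1, 1, 1, 1)  # per-sample mixing weight
        return g * deep + (1 - g) * shallow
```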
arXiv Detail & Related papers (2021-11-14T15:14:13Z)
Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world dataset MNIST, and the results verify our analysis and conclusions.
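As a hedged sketch of the measurement side, Betti numbers of a sampled point
cloud (e.g. flattened MNIST digits) can be estimated with persistent homology
via the `ripser` package; the persistence cutoff is an illustrative choice.

```python
# Estimate Betti numbers of a point cloud from its persistence diagrams,
# keeping only features that persist beyond a cutoff.
import numpy as np
from ripser import ripser

def betti_numbers(points, maxdim=1, min_persistence=0.5):
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    return [int(np.sum((d[:, 1] - d[:, 0]) > min_persistence)) for d in dgms]

X = np.random.rand(200, 16)  # stand-in for a small sample of flattened images
print(betti_numbers(X))      # [b0, b1] estimates for the sampled data
```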
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
Towards Interpretable Deep Networks for Monocular Depth Estimation [78.84690613778739]
We quantify the interpretability of a deep MDE network by the depth selectivity of its hidden units.
We propose a method to train interpretable MDE deep networks without changing their original architectures.
Experimental results demonstrate that our method is able to enhance the interpretability of deep MDE networks.
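"Depth selectivity" is not spelled out in the summary; one hedged reading is
below: bin ground-truth depths, average a unit's response per bin, and score
how peaked the resulting profile is (the exact score is an assumption).

```python
# Illustrative depth-selectivity score for one hidden unit: 1 means the unit
# responds to a single depth bin, 0 means it responds to all bins equally.
import torch

def depth_selectivity(unit_act, depth, n_bins=8):
    """unit_act, depth: (N,) per-pixel activations and ground-truth depths;
    assumes non-negative (post-ReLU) activations and populated bins."""
    edges = torch.linspace(depth.min(), depth.max(), n_bins + 1)
    idx = torch.bucketize(depth, edges[1:-1])
    means = torch.stack([unit_act[idx == b].mean() for b in range(n_bins)])
    best = means.max()
    rest = (means.sum() - best) / (n_bins - 1)
    return ((best - rest) / (best + rest + 1e-8)).item()
```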
arXiv Detail & Related papers (2021-08-11T16:43:45Z)
Subspace Clustering Based Analysis of Neural Networks [7.451579925406617]
We learn affinity graphs from the latent structure of a given neural network layer trained over a set of inputs.
We then use tools from Community Detection to quantify structures present in the input.
We analyze the learned affinity graphs of the final convolutional layer of the network and demonstrate how an input's local neighbourhood affects its classification by the network.
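A minimal version of this pipeline, with cosine-similarity kNN edges and
greedy modularity communities as assumed concrete choices, could look like:

```python
# Build an affinity graph over one layer's activations, then detect
# communities with networkx (illustrative choices of similarity and method).
import numpy as np
import networkx as nx

def affinity_communities(feats, k=10):
    """feats: (N, D) activations of a chosen layer over N inputs."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    G = nx.Graph()
    for i in range(len(f)):
        for j in np.argsort(-sim[i])[1 : k + 1]:  # k nearest neighbours
            G.add_edge(i, int(j), weight=float(sim[i, j]))
    return list(nx.algorithms.community.greedy_modularity_communities(G))
```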
arXiv Detail & Related papers (2021-07-02T22:46:40Z)
The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
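A hedged sketch of such a dual objective: gradient ascent on an input that
maximizes a chosen layer's activation while penalizing distance from a
reference image. `layer_act` and the loss weights are hypothetical stand-ins.

```python
# Dual-objective visualization: maximize a layer's response, stay near `ref`.
import torch

def visualize(model, layer_act, ref, steps=200, lam=0.1, lr=0.05):
    """layer_act(model, x) -> the chosen layer's output (user-supplied)."""
    x = ref.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = layer_act(model, x)
        loss = -act.norm() + lam * (x - ref).norm()  # activation + distance
        loss.backward()
        opt.step()
    return x.detach()
```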
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
Pooling Methods in Deep Neural Networks, a Review [6.1678491628787455]
The pooling layer is an important layer that performs down-sampling on the feature maps coming from the previous layer.
In this paper, we review some of the famous and useful pooling methods.
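For concreteness, the two most common pooling operations in PyTorch:

```python
# Max and average pooling both halve the spatial resolution here.
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 32, 32)    # (N, C, H, W) feature maps
print(F.max_pool2d(x, 2).shape)  # torch.Size([1, 8, 16, 16])
print(F.avg_pool2d(x, 2).shape)  # torch.Size([1, 8, 16, 16])
```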
arXiv Detail & Related papers (2020-09-16T06:11:40Z)
ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
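A generic sketch in the spirit of iterative mask discovery (magnitude-based
and simplified; not ESPN's exact algorithm): train briefly, then zero out the
smallest surviving weights, and repeat.

```python
# One pruning round: tighten the binary mask by dropping the smallest `frac`
# of the weights that are still active.
import torch

def prune_step(weight, mask, frac=0.2):
    alive = weight[mask.bool()].abs()
    thresh = alive.kthvalue(max(1, int(frac * alive.numel()))).values
    return mask * (weight.abs() > thresh).float()

# Typical loop: train with `weight * mask` for a few epochs, call prune_step,
# and iterate until the target (extreme) sparsity is reached.
```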
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
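The core trick shared by most of these algorithms can be shown compactly: a
sign function in the forward pass with a straight-through estimator in the
backward pass (a generic sketch, not any single surveyed method).

```python
# Weight binarization with a straight-through estimator (STE): forward uses
# sign(w); backward passes gradients through wherever |w| <= 1.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1)

w = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()  # gradients flow despite sign()
print(w.grad)
```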
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.