Interpreting CNN Predictions using Conditional Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2301.08067v3
- Date: Thu, 9 Nov 2023 15:59:56 GMT
- Title: Interpreting CNN Predictions using Conditional Generative Adversarial
Networks
- Authors: R T Akash Guna, Raul Benitez, O K Sikha
- Abstract summary: We train a conditional Generative Adversarial Network (GAN) to generate visual interpretations of a Convolutional Neural Network (CNN).
To comprehend a CNN, the GAN is trained with information on how the CNN processes an image when making predictions.
We developed a suitable representation of CNN architectures by cumulatively averaging intermediate interpretation maps.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method that trains a conditional Generative Adversarial
Network (GAN) to generate visual interpretations of a Convolutional Neural
Network (CNN). To comprehend a CNN, the GAN is trained with information on how
the CNN processes an image when making predictions. Supplying that information
has two main challenges: how to represent this information in a form that is
feedable to the GANs and how to effectively feed the representation to the GAN.
To address these issues, we developed a suitable representation of CNN
architectures by cumulatively averaging intermediate interpretation maps. We
also propose two alternative approaches to feed the representations to the GAN
and to choose an effective training strategy. Our approach learned the general
aspects of CNNs and was agnostic to datasets and CNN architectures. The study
includes both qualitative and quantitative evaluations and compares the
proposed GANs with state-of-the-art approaches. Upon interpreting the proposed
GAN, we found that the initial and final layers of CNNs are equally crucial
for interpreting them. We believe training a GAN to interpret CNNs
would open doors for improved interpretations by leveraging fast-paced deep
learning advancements. The code used for experimentation is publicly available
at https://github.com/Akash-guna/Explain-CNN-With-GANS
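The representation described in the abstract, cumulatively averaging intermediate interpretation maps, can be sketched roughly as follows. This is a minimal illustration assuming one same-sized 2-D interpretation map per layer (e.g. per-block heatmaps); the function name and details are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cumulative_average_maps(interpretation_maps):
    """Given a list of same-sized 2-D interpretation maps, ordered from
    shallow to deep layers, return the running averages: the k-th output
    is the mean of the first k maps, so deeper entries summarise the
    network's processing up to that depth.

    Illustrative sketch only; not the authors' implementation.
    """
    maps = np.stack(interpretation_maps, axis=0).astype(float)
    running_sum = np.cumsum(maps, axis=0)
    counts = np.arange(1, maps.shape[0] + 1).reshape(-1, 1, 1)
    return running_sum / counts

# Toy example with three 2x2 maps:
m1 = np.ones((2, 2))
m2 = np.zeros((2, 2))
m3 = np.full((2, 2), 0.5)
averaged = cumulative_average_maps([m1, m2, m3])
# averaged[0] equals m1; averaged[1] and averaged[2] are 0.5 everywhere
```

Any of the resulting maps (or the full stack) could then serve as the conditioning input to the GAN; how the representation is actually fed to the GAN is the subject of the two approaches the paper proposes.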
Related papers
- A Neurosymbolic Framework for Bias Correction in Convolutional Neural Networks [2.249916681499244]
We introduce a neurosymbolic framework called NeSyBiCor for bias correction in a trained CNN.
We show that our framework successfully corrects the biases of CNNs trained with subsets of classes from the "Places" dataset.
arXiv Detail & Related papers (2024-05-24T19:09:53Z)
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The distilled "boosted" two-layer GNN performs much better on Mini-ImageNet than CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are among the most successful computer vision systems for object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- Demystifying CNNs for Images by Matched Filters [13.121514086503591]
Convolutional neural networks (CNNs) have been revolutionising the way we approach and use intelligent machines in the Big Data era.
CNNs have been put under scrutiny owing to their black-box nature, as well as the lack of theoretical support and physical meaning of their operation.
This paper attempts to demystify the operation of CNNs by employing the perspective of matched filtering.
arXiv Detail & Related papers (2022-10-16T12:39:17Z)
- Deeply Explain CNN via Hierarchical Decomposition [75.01251659472584]
In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction.
This paper introduces a hierarchical decomposition framework to explain CNN's decision-making process in a top-down manner.
arXiv Detail & Related papers (2022-01-23T07:56:04Z)
- Interpretable Compositional Convolutional Neural Networks [20.726080433723922]
We propose a method to modify a traditional convolutional neural network (CNN) into an interpretable compositional CNN.
In a compositional CNN, each filter is supposed to consistently represent a specific compositional object part or image region with a clear meaning.
Our method can be broadly applied to different types of CNNs.
arXiv Detail & Related papers (2021-07-09T15:01:24Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
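The matched-filtering perspective in "Demystifying CNNs for Images by Matched Filters" above can be illustrated with a toy 1-D example: a convolutional kernel acts as a template, and its cross-correlation response peaks where the input best matches it. This is a hypothetical illustration, not the paper's code:

```python
import numpy as np

def matched_filter_response(signal, template):
    """Cross-correlate a 1-D signal with a template ('valid' mode):
    the response at index i is the dot product of the template with
    the signal window starting at i."""
    return np.correlate(signal, template, mode="valid")

# Embed the template itself inside a zero signal at index 5.
template = np.array([1.0, 2.0, 1.0])
signal = np.concatenate([np.zeros(5), template, np.zeros(5)])

resp = matched_filter_response(signal, template)
best = int(np.argmax(resp))  # peaks at index 5, where the template sits
```

The same reasoning extends to 2-D convolutional filters: a filter responds most strongly to image patches that resemble it, which is the sense in which a trained kernel can be read as a learned template.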
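The "Curriculum By Smoothing" entry above also lends itself to a short sketch: low-pass filter a feature map with a Gaussian kernel whose width is annealed toward zero over training, so the network sees only coarse structure early on. The function names and the linear annealing schedule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalised 1-D Gaussian kernel truncated at ~3 standard deviations."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    """Separable Gaussian blur of a 2-D feature map (same output size)."""
    if sigma <= 0:
        return fmap
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, fmap)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def sigma_schedule(epoch, total_epochs, sigma0=2.0):
    """Linearly anneal the blur strength to zero over training
    (an assumed schedule for illustration)."""
    return sigma0 * max(0.0, 1.0 - epoch / total_epochs)
```

In a training loop, `smooth_feature_map` would be applied to intermediate activations with `sigma_schedule(epoch, total_epochs)`, so that high-frequency detail is progressively re-admitted as training proceeds.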
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.