Visualizing the Emergence of Intermediate Visual Patterns in DNNs
- URL: http://arxiv.org/abs/2111.03505v1
- Date: Fri, 5 Nov 2021 13:49:39 GMT
- Title: Visualizing the Emergence of Intermediate Visual Patterns in DNNs
- Authors: Mingjie Li, Shaobo Wang, Quanshi Zhang
- Abstract summary: This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN.
We visualize how the DNN gradually learns regional visual patterns in each intermediate layer during the training process.
This method also provides new insights into signal-processing behaviors of existing deep-learning techniques, such as adversarial attacks and knowledge distillation.
- Score: 19.043540343193946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a method to visualize the discrimination power of
intermediate-layer visual patterns encoded by a DNN. Specifically, we visualize
(1) how the DNN gradually learns regional visual patterns in each intermediate
layer during the training process, and (2) the effects of the DNN using
non-discriminative patterns in low layers to construct discriminative patterns
in middle/high layers through the forward propagation. Based on our
visualization method, we can quantify knowledge points (i.e., the number of
discriminative visual patterns) learned by the DNN to evaluate the
representation capacity of the DNN. Furthermore, this method also provides new
insights into signal-processing behaviors of existing deep-learning techniques,
such as adversarial attacks and knowledge distillation.
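The abstract does not spell out how "knowledge points" are computed, but the idea of counting discriminative intermediate-layer patterns can be illustrated with a simple proxy. The sketch below is hypothetical, not the paper's actual metric: it scores each feature channel by a Fisher-style ratio of between-class separation to within-class spread, and counts channels above a threshold as discriminative. The function name, threshold, and scoring rule are all assumptions for illustration.

```python
import numpy as np

def count_knowledge_points(features, labels, threshold=0.5):
    """Hypothetical proxy for 'knowledge points': count feature channels
    whose activations discriminate between classes.

    A channel counts as a discriminative visual pattern if the spread of
    its per-class mean activations is large relative to its average
    within-class standard deviation (a Fisher-style ratio).

    features: (n_samples, n_channels) intermediate-layer activations
    labels:   (n_samples,) integer class labels
    """
    classes = np.unique(labels)
    # per-class mean activation for each channel: (n_classes, n_channels)
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    between = class_means.std(axis=0)  # spread across class means
    within = np.stack(
        [features[labels == c].std(axis=0) for c in classes]
    ).mean(axis=0)  # average within-class spread
    score = between / (within + 1e-8)  # per-channel discrimination score
    return int((score > threshold).sum())
```

On toy data with one well-separated channel and one pure-noise channel, the count is 1: only the separated channel's ratio clears the threshold.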
Related papers
- Supervised Gradual Machine Learning for Aspect Category Detection [0.9857683394266679]
Aspect Category Detection (ACD) aims to identify implicit and explicit aspects in a given review sentence.
We propose a novel approach to tackle the ACD task by combining Deep Neural Networks (DNNs) with Gradual Machine Learning (GML) in a supervised setting.
arXiv Detail & Related papers (2024-04-08T07:21:46Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Characterising representation dynamics in recurrent neural networks for object recognition [0.0]
Recurrent neural networks (RNNs) have yielded promising results for both recognizing objects in challenging conditions and modeling aspects of primate vision.
Here, we study the representational dynamics of recurrent computations in RNNs trained for object classification on MiniEcoset.
arXiv Detail & Related papers (2023-08-23T21:36:35Z) - Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from topological data analysis (TDA) to gain insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Towards interpreting computer vision based on transformation invariant optimization [10.820985444099536]
In this work, visualized images that activate the neural network toward the target classes are generated by a back-propagation method.
We present cases in which this method helps us gain insight into neural networks.
arXiv Detail & Related papers (2021-06-18T08:04:10Z) - Variational Structured Attention Networks for Deep Visual Representation Learning [49.80498066480928]
We propose a unified deep framework to jointly learn both spatial attention maps and channel attention in a principled manner.
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework.
We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic and the CNN front-end parameters.
arXiv Detail & Related papers (2021-03-05T07:37:24Z) - What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
arXiv Detail & Related papers (2021-01-18T06:38:41Z) - Explaining Deep Neural Networks using Unsupervised Clustering [12.639074798397619]
We propose a novel method to explain trained deep neural networks (DNNs) by distilling them into surrogate models using unsupervised clustering.
Our method can be applied flexibly to any subset of layers of a DNN architecture and can incorporate low-level and high-level information.
arXiv Detail & Related papers (2020-07-15T04:49:43Z) - Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.