Understanding the Role of Pathways in a Deep Neural Network
- URL: http://arxiv.org/abs/2402.18132v1
- Date: Wed, 28 Feb 2024 07:53:19 GMT
- Title: Understanding the Role of Pathways in a Deep Neural Network
- Authors: Lei Lyu, Chen Pang, Jihua Wang
- Abstract summary: We analyze a convolutional neural network (CNN) trained on a classification task and present an algorithm to extract the diffusion pathways of individual pixels.
We find that the few largest pathways of an individual pixel from an image tend to cross the feature maps in each layer that are important for classification.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have demonstrated superior performance in artificial
intelligence applications, but the opacity of their inner working mechanism
is a major drawback in their application. The prevailing unit-based
interpretation is a statistical observation of stimulus-response data, which
fails to reveal the detailed internal processes inherent to neural
networks. In this work, we analyze a convolutional neural network (CNN) trained
on a classification task and present an algorithm to extract the diffusion
pathways of individual pixels, identifying the locations of pixels in an input
image associated with object classes. The pathways allow us to test the causal
components that are important for classification, and the pathway-based
representations are clearly distinguishable between categories. We find that
the few largest pathways of an individual pixel from an image tend to cross the
feature maps in each layer that are important for classification, and that the
large pathways of images of the same category are more consistent in their
trends than those of different categories. We also apply the pathways to
understanding adversarial attacks, object completion, and movement perception.
Further, the total number of pathways on feature maps across all layers clearly
discriminates among the original, deformed, and target samples.
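The abstract does not specify how the pathway extraction is implemented; a minimal sketch of the idea, under the assumption that a pathway greedily follows the strongest connections of a pixel's response through successive layers, might look like the following (the function name, data layout, and greedy top-k rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def trace_largest_pathways(activations, weights, k=3):
    """Greedily follow the k strongest connections of one input pixel
    through successive layers, recording which feature maps each
    pathway crosses.

    Assumed data layout (hypothetical): activations[l] has shape
    (channels_l,), giving each feature map's response at the pixel's
    location in layer l; weights[l] has shape (channels_{l+1}, channels_l),
    giving effective connection strengths between layers l and l+1.
    """
    # Start from the k feature maps most driven by the pixel in layer 0.
    current = np.argsort(-np.abs(activations[0]))[:k]
    pathways = [current.tolist()]
    for l in range(len(weights)):
        # Contribution of each next-layer map: |W| weighted by the
        # activation magnitude of the currently selected maps.
        contrib = np.abs(weights[l][:, current]) * np.abs(activations[l][current])
        strength = contrib.sum(axis=1)
        current = np.argsort(-strength)[:k]
        pathways.append(current.tolist())
    return pathways
```

A pathway is then the sequence of selected feature-map indices per layer, which can be compared across images or categories.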
Related papers
- Finding Concept Representations in Neural Networks with Self-Organizing
Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how the activations of neural network layers correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
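The relative entropy measure mentioned above can be made concrete with a short sketch. The normalization of the activation map into a distribution and the uniform reference are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def activation_relative_entropy(activation_map, eps=1e-12):
    """Relative entropy (KL divergence) of an activation map against a
    uniform baseline: a peaked, localized response scores high, while a
    diffuse response scores near zero.
    """
    a = np.abs(activation_map).ravel() + eps   # eps avoids log(0)
    p = a / a.sum()                            # activations as a distribution
    u = np.full_like(p, 1.0 / p.size)          # uniform reference
    return float(np.sum(p * np.log(p / u)))
```

Under this measure, a map whose activation concentrates on a concept's location scores close to log(n), while a uniformly active map scores zero.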
arXiv Detail & Related papers (2023-12-10T12:10:34Z) - A domain adaptive deep learning solution for scanpath prediction of
paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which underlies several human cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z) - Prune and distill: similar reformatting of image information along rat
visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Joint Learning of Neural Transfer and Architecture Adaptation for Image
Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture for each domain task, along with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z) - Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z) - Graph Neural Networks for Unsupervised Domain Adaptation of
Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - Distractor-Aware Neuron Intrinsic Learning for Generic 2D Medical Image
Classifications [30.62607811479386]
We observe that convolutional neural networks (CNNs) are vulnerable to distractor interference.
In this paper, we explore distractors in the CNN feature space by proposing a neuron intrinsic learning method.
The proposed method performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-20T09:59:04Z) - Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for back propagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z) - Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.