Interpretation of ResNet by Visualization of Preferred Stimulus in Receptive Fields
- URL: http://arxiv.org/abs/2006.01645v2
- Date: Thu, 9 Jul 2020 11:26:17 GMT
- Title: Interpretation of ResNet by Visualization of Preferred Stimulus in Receptive Fields
- Authors: Genta Kobayashi and Hayaru Shouno
- Abstract summary: We investigate the receptive fields of a ResNet on the ImageNet classification task.
We find that ResNet has orientation-selective neurons and double-opponent color neurons.
In addition, we suggest that some inactive neurons in the first layer of ResNet affect the classification task.
- Score: 2.28438857884398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the methods used in image recognition is the Deep
Convolutional Neural Network (DCNN). A DCNN greatly improves the expressive
power of features by deepening the hidden layers of a CNN, an architecture
originally based on models of the visual cortex of mammals. The Residual
Network (ResNet) extends this design with skip connections. ResNet is an
advanced model in terms of its training method, but it has not been
interpreted from a biological viewpoint. In this research, we investigate the
receptive fields of a ResNet trained on the ImageNet classification task. We
find that the ResNet has orientation-selective neurons and double-opponent
color neurons. In addition, we suggest that some inactive neurons in the
first layer of the ResNet affect the classification task.
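
Since the first convolutional layer of a ResNet operates directly on RGB pixels, its preferred stimuli can largely be read off from the learned kernels themselves, and inactive neurons can be flagged by their near-zero responses. Below is a minimal sketch in that spirit, not the authors' exact procedure; the choice of torchvision's ResNet-50, the random input batch, and the inactivity threshold are all assumptions.

```python
# Sketch: inspect first-layer "preferred stimuli" and flag inactive channels.
# Assumptions: torchvision ResNet-50, random inputs, threshold 1e-3.
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

# 1) The first conv layer acts directly on RGB input, so its 7x7 kernels can
#    be rendered as images: oriented edge-like kernels suggest orientation
#    selectivity, and kernels pairing opposing colors suggest double-opponent
#    color coding.
w = model.conv1.weight.detach()          # shape: (64, 3, 7, 7)
w = (w - w.min()) / (w.max() - w.min())  # normalize to [0, 1] for display
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(w[i].permute(1, 2, 0).numpy())  # (7, 7, 3) RGB kernel image
    ax.axis("off")
plt.savefig("conv1_kernels.png")

# 2) Flag possibly inactive first-layer neurons: channels whose responses
#    stay near zero after batch norm + ReLU.
x = torch.randn(32, 3, 224, 224)         # stand-in for a batch of images
with torch.no_grad():
    a = torch.relu(model.bn1(model.conv1(x)))  # (32, 64, 112, 112)
mean_act = a.mean(dim=(0, 2, 3))               # mean activation per channel
inactive = (mean_act < 1e-3).nonzero().flatten()
print("near-inactive conv1 channels:", inactive.tolist())
```

In practice one would average activations over real ImageNet images rather than random noise, but the structure of the check is the same: a channel whose response is flat across inputs contributes nothing downstream.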
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework that unifies CNNs and GNNs via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Convolutional Neural Networks Exploiting Attributes of Biological Neurons [7.3517426088986815]
Deep neural networks like Convolutional Neural Networks (CNNs) have emerged as front-runners, often surpassing human capabilities.
Here, we integrate the principles of biological neurons in certain layer(s) of CNNs.
We aim to extract image features to use as input to CNNs, hoping to enhance training efficiency and achieve better accuracy.
arXiv Detail & Related papers (2023-11-14T16:58:18Z)
- Deep Neural Networks as Complex Networks [1.704936863091649]
We use Complex Network Theory to represent Deep Neural Networks (DNNs) as directed weighted graphs.
We introduce metrics to study DNNs as dynamical systems, with a granularity that spans from weights to layers, including neurons.
We show that our metrics discriminate low vs. high performing networks.
arXiv Detail & Related papers (2022-09-12T16:26:04Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs because models must be trained before their performance can be predicted.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability [1.0742675209112622]
We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network.
We show a real-world application of our method to air pollution prediction with street-level images.
arXiv Detail & Related papers (2021-12-31T17:54:57Z)
- BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks [0.6193838300896449]
We propose a spiking neural network (SNN) trained using spike-timing-dependent plasticity (STDP) and its reward-modulated variant (R-STDP) learning rules.
Our network consists of a rate-coded input layer followed by a locally connected hidden layer and a decoding output layer.
We used the MNIST dataset to measure image classification accuracy and to assess the robustness of our rewarding system to varying target responses.
arXiv Detail & Related papers (2021-09-12T15:28:48Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, requiring neither a generator network nor modifications to the original model (a hedged sketch of this style of optimization appears after this list).
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- DRU-net: An Efficient Deep Convolutional Neural Network for Medical Image Segmentation [2.3574651879602215]
Residual network (ResNet) and densely connected network (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs).
We propose an efficient network architecture that combines the advantages of both.
arXiv Detail & Related papers (2020-04-28T12:16:24Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
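
As referenced in the entry for "The Mind's Eye" above, feature-visualization methods of this kind optimize an input image against a dual objective: maximize a chosen layer's activation while keeping the image close to a reference. The sketch below illustrates that general recipe, not the paper's implementation; the ResNet-50 backbone, the layer3 hook, the loss weight lam, and the step count are all assumptions.

```python
# Sketch: optimize an image against an activation loss plus a distance loss.
# Assumptions: torchvision ResNet-50, layer3 as the target layer, lam = 0.1.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)                # freeze the original model

feats = {}
def hook(_module, _inputs, output):
    feats["act"] = output                  # capture the target layer's output
model.layer3.register_forward_hook(hook)

ref = torch.rand(1, 3, 224, 224)           # stand-in reference image
x = ref.clone().requires_grad_(True)       # the image being optimized
opt = torch.optim.Adam([x], lr=0.05)
lam = 0.1                                  # distance-loss weight (assumption)

for step in range(200):
    opt.zero_grad()
    model(x)
    act_loss = -feats["act"].norm()        # maximize layer activation
    dist_loss = (x - ref).pow(2).mean()    # stay near the reference image
    loss = act_loss + lam * dist_loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)                     # keep pixels in a valid range
```

The distance term is what keeps the result recognizable rather than adversarial noise; raising lam trades feature strength for fidelity to the reference.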
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.