How Convolutional Neural Network Architecture Biases Learned Opponency
and Colour Tuning
- URL: http://arxiv.org/abs/2010.02634v1
- Date: Tue, 6 Oct 2020 11:33:48 GMT
- Title: How Convolutional Neural Network Architecture Biases Learned Opponency
and Colour Tuning
- Authors: Ethan Harris, Daniela Mihai, Jonathon Hare
- Abstract summary: Recent work suggests that changing Convolutional Neural Network (CNN) architecture by introducing a bottleneck in the second layer can yield changes in learned function.
Fully understanding this relationship requires a way of quantitatively comparing trained networks.
We propose an approach to obtaining spatial and colour tuning curves for convolutional neurons.
- Score: 1.0742675209112622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work suggests that changing Convolutional Neural Network (CNN)
architecture by introducing a bottleneck in the second layer can yield changes
in learned function. Fully understanding this relationship requires a way of
quantitatively comparing trained networks. The fields of electrophysiology and
psychophysics have developed a wealth of methods for characterising visual
systems which permit such comparisons. Inspired by these methods, we propose an
approach to obtaining spatial and colour tuning curves for convolutional
neurons, which can be used to classify cells in terms of their spatial and
colour opponency. We perform these classifications for a range of CNNs with
different depths and bottleneck widths. Our key finding is that networks with a
bottleneck show a strong functional organisation: almost all cells in the
bottleneck layer become both spatially and colour opponent, while cells in the
layer following the bottleneck become non-opponent. The colour tuning data can
further be used to form a rich understanding of how colour is encoded by a
network. As a concrete demonstration, we show that shallower networks without a
bottleneck learn a complex non-linear colour system, whereas deeper networks
with tight bottlenecks learn a simple channel opponent code in the bottleneck
layer. We further develop a method of obtaining a hue sensitivity curve for a
trained CNN which enables high level insights that complement the low level
findings from the colour tuning data. We go on to train a series of networks
under different conditions to ascertain the robustness of the discussed
results. Ultimately, our methods and findings coalesce with prior art,
strengthening our ability to interpret trained CNNs and furthering our
understanding of the connection between architecture and learned
representation. Code for all experiments is available at
https://github.com/ecs-vlc/opponency.
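To make the probing idea concrete, the sketch below shows one way to measure a colour tuning curve for a single convolutional channel by presenting uniform colour fields and recording the channel's mean response. It is a minimal PyTorch illustration only: the hue parameterisation, the CIFAR-style input size, and the opponency heuristic are assumptions for exposition, not the authors' exact protocol (see the repository above for the real implementation).

```python
# Minimal sketch (PyTorch): hue tuning curve for one conv channel.
# Assumptions: a 3x32x32 input, a simple RGB colour circle, and a crude
# opponency test -- all illustrative, not the paper's exact protocol.
import math
import torch
import torch.nn as nn

def hue_to_rgb(theta: float) -> torch.Tensor:
    # Map a hue angle (radians) to an RGB triple on a simple colour circle.
    return torch.tensor([
        0.5 + 0.5 * math.cos(theta),
        0.5 + 0.5 * math.cos(theta - 2 * math.pi / 3),
        0.5 + 0.5 * math.cos(theta + 2 * math.pi / 3),
    ])

def colour_tuning_curve(model: nn.Module, layer: nn.Module, channel: int,
                        size: int = 32, n_hues: int = 64) -> torch.Tensor:
    # Present a uniform field of each hue and record the channel's
    # mean activation, captured with a forward hook on the target layer.
    responses = torch.zeros(n_hues)
    captured = {}
    handle = layer.register_forward_hook(
        lambda mod, inp, out: captured.update(out=out.detach()))
    with torch.no_grad():
        for k in range(n_hues):
            rgb = hue_to_rgb(2 * math.pi * k / n_hues)
            stimulus = rgb.view(1, 3, 1, 1).expand(1, 3, size, size)
            model(stimulus)
            responses[k] = captured["out"][0, channel].mean()
    handle.remove()
    return responses

def looks_colour_opponent(curve: torch.Tensor, tol: float = 1e-3) -> bool:
    # Crude heuristic: a cell excited by some hues and suppressed by
    # others, relative to its mean response, is treated as opponent.
    centred = curve - curve.mean()
    return bool(centred.max() > tol and centred.min() < -tol)
```

Sweeping this over every channel in a layer yields the kind of per-cell classification the paper describes; spatial tuning curves would instead vary the spatial frequency of a grating stimulus rather than its hue.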
Related papers
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z)
- Training Convolutional Neural Networks with the Forward-Forward algorithm [1.74440662023704]
The Forward-Forward (FF) algorithm has so far only been used in fully connected networks.
We show how the FF paradigm can be extended to CNNs.
Our FF-trained CNN, featuring a novel spatially-extended labeling technique, achieves a classification accuracy of 99.16% on the MNIST hand-written digits dataset.
arXiv Detail & Related papers (2023-12-22T18:56:35Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when the data are imbalanced across color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z)
- What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z)
- RGB-D SLAM Using Attention Guided Frame Association [11.484398586420067]
We propose the use of task-specific network attention for RGB-D indoor SLAM.
We integrate layer-wise object attention information (layer gradients) with CNN layer representations to improve frame association performance.
Experiments show promising initial results with improved performance.
arXiv Detail & Related papers (2022-01-28T11:23:29Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.