Exploring the Interchangeability of CNN Embedding Spaces
- URL: http://arxiv.org/abs/2010.02323v4
- Date: Fri, 12 Feb 2021 01:59:35 GMT
- Title: Exploring the Interchangeability of CNN Embedding Spaces
- Authors: David McNeely-White, Benjamin Sattelberg, Nathaniel Blanchard, Ross
Beveridge
- Abstract summary: We map between 10 image-classification CNNs and between 4 facial-recognition CNNs.
For CNNs trained to the same classes and sharing a common backend-logit architecture, a linear mapping may always be calculated directly from the backend layer weights.
The implications are far-reaching, suggesting an underlying commonality between representations learned by networks designed and trained for a common task.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CNN feature spaces can be linearly mapped and consequently are often
interchangeable. This equivalence holds across variations in architectures,
training datasets, and network tasks. Specifically, we mapped between 10
image-classification CNNs and between 4 facial-recognition CNNs. When image
embeddings generated by one CNN are transformed into embeddings corresponding
to the feature space of a second CNN trained on the same task, their respective
image classification or face verification performance is largely preserved. For
CNNs trained to the same classes and sharing a common backend-logit (softmax)
architecture, a linear mapping may always be calculated directly from the
backend layer weights. However, the case of a closed-set analysis with perfect
knowledge of classifiers is limiting. Therefore, empirical methods of
estimating mappings are presented for both the closed-set image classification
task and the open-set task of face recognition. The results presented expose
the essentially interchangeable nature of CNN embeddings for two important and
common recognition tasks. The implications are far-reaching, suggesting an
underlying commonality between representations learned by networks designed and
trained for a common task. One practical implication is that face embeddings
from some commonly used CNNs can be compared using these mappings.
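As a concrete sketch of the empirical mapping the abstract describes, a linear map between two embedding spaces can be estimated by least squares from paired embeddings of the same images. This is a hypothetical illustration, not the authors' code: the dimensions, synthetic data, and variable names are assumptions.

```python
import numpy as np

# Hypothetical setup: paired embeddings from two CNNs on the same images.
# E1[i] and E2[i] are the two networks' embeddings of image i.
rng = np.random.default_rng(0)
n, d1, d2 = 500, 64, 48                  # sample count and embedding dims (assumed)
M_true = rng.normal(size=(d1, d2))       # ground-truth map for this synthetic demo
E1 = rng.normal(size=(n, d1))
E2 = E1 @ M_true + 0.01 * rng.normal(size=(n, d2))  # near-linear relation plus noise

# Empirical estimate of the linear map: least squares solving E1 @ M ≈ E2.
M, *_ = np.linalg.lstsq(E1, E2, rcond=None)

# Map a new embedding from CNN 1's space into CNN 2's space.
e_new = rng.normal(size=(1, d1))
e_mapped = e_new @ M
```

When both networks share the same output classes, the abstract notes the map can instead be read directly off the backend layer weights; the least-squares variant above is the empirical route that also covers the open-set case.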
Related papers
- Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based
Comparison of Feature Spaces [0.0]
Safety-critical applications require transparency in artificial intelligence components.
Convolutional neural networks (CNNs), widely used for perception tasks, lack inherent interpretability.
We propose two methods for estimating the layer-wise similarity between semantic information inside CNN latent spaces.
arXiv Detail & Related papers (2023-04-30T13:53:39Z)
- Random Padding Data Augmentation [23.70951896315126]
A convolutional neural network (CNN) must learn to recognize the same object in different positions in images.
The usefulness of the features' spatial information in CNNs has not been well investigated.
We introduce Random Padding, a new type of padding method for training CNNs.
arXiv Detail & Related papers (2023-02-17T04:15:33Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are among the most successful computer vision systems for object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- Deeply Explain CNN via Hierarchical Decomposition [75.01251659472584]
In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction.
This paper introduces a hierarchical decomposition framework to explain a CNN's decision-making process in a top-down manner.
arXiv Detail & Related papers (2022-01-23T07:56:04Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
- Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias [18.003188982585737]
Recent experiments in computer vision identify texture bias as the primary reason for the strong results of models employing Convolutional Neural Networks (CNNs).
It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information like texture to increase accuracy, thus failing to explore any global statistics.
We propose CognitiveCNN, a new intuitive architecture inspired by feature integration theory in psychology, which utilises human-interpretable features like shape, texture, and edges to reconstruct and classify the image.
arXiv Detail & Related papers (2020-06-25T22:32:54Z)
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general-purpose CNN classifiers, which have achieved top results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to elevate the baseline in their experiments.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose a curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
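The smoothing curriculum summarized in the last entry can be sketched as a Gaussian low-pass filter applied to a feature map, with the filter strength annealed toward zero over training. This is a hedged sketch only: the kernel, annealing schedule, and where in the network the filter is applied are assumptions, not the paper's exact method.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=3):
    """Normalized 1-D Gaussian kernel (truncated at the given radius)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    """Separable Gaussian low-pass filter over a 2-D feature map."""
    if sigma <= 0:
        return fmap  # curriculum finished: no smoothing
    k = gaussian_kernel1d(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, fmap)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def sigma_schedule(epoch, total_epochs, sigma0=2.0):
    """Linearly anneal the blur strength so feature maps sharpen as training progresses."""
    return sigma0 * max(0.0, 1.0 - epoch / total_epochs)

# Demo: heavy smoothing early in training suppresses high-frequency detail.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 16))
smoothed = smooth_feature_map(fmap, sigma_schedule(epoch=0, total_epochs=10))
```

Annealing the blur lets the network see coarse, low-frequency structure first and progressively richer feature maps later, matching the curriculum idea described in the summary.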
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.