Similarity and Matching of Neural Network Representations
- URL: http://arxiv.org/abs/2110.14633v1
- Date: Wed, 27 Oct 2021 17:59:46 GMT
- Title: Similarity and Matching of Neural Network Representations
- Authors: Adrián Csiszárik, Péter Kőrösi-Szabó, Ákos K. Matszangosz, Gergely Papp, Dániel Varga
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity
of representations in deep neural networks. With this toolset, we aim to match
the activations on given layers of two trained neural networks by joining them
with a stitching layer. We demonstrate that the inner representations emerging
in deep convolutional neural networks with the same architecture but different
initializations can be matched with a surprisingly high degree of accuracy even
with a single, affine stitching layer. We choose the stitching layer from
several possible classes of linear transformations and investigate their
performance and properties. The task of matching representations is closely
related to notions of similarity. Using this toolset, we also provide a novel
viewpoint on the current line of research regarding similarity indices of
neural network representations: the perspective of the performance on a task.
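To make the stitching procedure concrete, the sketch below joins the front of one trained network to the tail of another through a trainable 1x1 convolution, i.e. a per-location affine map over channels. It is a minimal PyTorch illustration under our own naming (front_a and tail_b are hypothetical frozen sub-networks), not the authors' Dr. Frankenstein code.

```python
# Minimal sketch of affine stitching between two trained CNNs (PyTorch).
# `front_a` (layers of net A up to the cut) and `tail_b` (layers of net B
# after the cut) are hypothetical frozen sub-networks; only the stitching
# layer is trained on the original task.
import torch
import torch.nn as nn

class AffineStitch(nn.Module):
    """Per-location affine map between channel spaces: a 1x1 convolution."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.map = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.map(x)

def make_stitched_model(front_a: nn.Module, tail_b: nn.Module,
                        c_a: int, c_b: int) -> nn.Module:
    """Freeze both trained sub-networks and join them with a stitching layer."""
    for p in front_a.parameters():
        p.requires_grad = False
    for p in tail_b.parameters():
        p.requires_grad = False
    stitch = AffineStitch(c_a, c_b)
    return nn.Sequential(front_a, stitch, tail_b)

# Training then proceeds as usual, with the optimizer given only the
# stitching layer's parameters:
# model = make_stitched_model(front_a, tail_b, c_a=64, c_b=64)
# opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```

The accuracy of the stitched model on the original task then serves as a task-level measure of how well the two representations match.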
Related papers
- Identifying Sub-networks in Neural Networks via Functionally Similar Representations [41.028797971427124]
We take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks.
Our approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
arXiv Detail & Related papers (2024-10-21T20:19:00Z)
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
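As a toy illustration of the "computational graph of parameters" idea in the paper above, the sketch below turns a plain MLP's weights into a directed graph with neurons as nodes (biases as node attributes) and weights as edge attributes. The paper's actual graph construction may differ; this only conveys the general idea.

```python
# Toy "computational graph of parameters" for a plain MLP, using networkx.
# Nodes are neurons (keyed by layer and index, with biases as attributes);
# edges carry the connecting weights.
import networkx as nx
import numpy as np

def mlp_to_graph(weights: list, biases: list) -> nx.DiGraph:
    g = nx.DiGraph()
    # Input-layer nodes carry no bias.
    for i in range(weights[0].shape[1]):
        g.add_node((0, i), bias=0.0)
    for layer, (w, b) in enumerate(zip(weights, biases), start=1):
        for j in range(w.shape[0]):
            g.add_node((layer, j), bias=float(b[j]))
            for i in range(w.shape[1]):
                g.add_edge((layer - 1, i), (layer, j), weight=float(w[j, i]))
    return g

# Example: a 3-4-2 MLP with random parameters.
rng = np.random.default_rng(0)
ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
graph = mlp_to_graph(ws, bs)
print(graph.number_of_nodes(), graph.number_of_edges())  # 9 nodes, 20 edges
```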
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Model Stitching: Looking For Functional Similarity Between Representations [5.657258033928475]
We expand on a previous work which used model stitching to compare representations of the same shapes learned by differently seeded and/or trained neural networks of the same architecture.
We reveal unexpected behavior of model stitching: for small ResNets, convolution-based stitching can reach high accuracy if those layers come later in the first (sender) network than in the second (receiver) network.
arXiv Detail & Related papers (2023-03-20T17:12:42Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
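The permutation symmetry that motivates the paper above is easy to verify directly: reordering one hidden layer's neurons, together with the matching rows and columns of the adjacent weight matrices, leaves the network's function unchanged. A small NumPy check (our own illustration, not the paper's code):

```python
# Verify the hidden-layer permutation symmetry of a one-hidden-layer MLP:
# permuting the hidden units (rows of w1, entries of b1, columns of w2)
# leaves the function computed by the network unchanged.
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
w2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

def mlp(x, w1, b1, w2, b2):
    h = np.maximum(w1 @ x + b1, 0.0)   # ReLU hidden layer
    return w2 @ h + b2

perm = rng.permutation(16)             # arbitrary reordering of hidden units
x = rng.normal(size=8)
y = mlp(x, w1, b1, w2, b2)
y_perm = mlp(x, w1[perm], b1[perm], w2[:, perm], b2)
assert np.allclose(y, y_perm)          # identical outputs
```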
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Comparing Deep Neural Nets with UMAP Tour [12.910602784766562]
UMAP Tour is built to visually inspect and compare the internal behavior of real-world neural network models.
We find concepts learned in state-of-the-art models such as GoogLeNet and ResNet, and dissimilarities between them.
arXiv Detail & Related papers (2021-10-18T15:59:13Z)
- Similarity of Neural Networks with Gradients [8.804507286438781]
We propose to leverage both feature vectors and gradients in designing the representation of a neural network.
We show that the proposed approach provides a state-of-the-art method for computing similarity of neural networks.
arXiv Detail & Related papers (2020-03-25T17:04:10Z)
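One hedged way to picture the feature-plus-gradient idea in the entry above: build a per-example representation from a network's outputs concatenated with input gradients, then compare two networks with linear CKA, a standard representational-similarity index. The paper's exact construction may differ; the function and variable names here are ours.

```python
# Hedged sketch: per-example feature vectors concatenated with per-example
# gradient vectors, compared across two networks with linear CKA.
import torch

def example_representation(net, x):
    """Feature = network output; gradient = d||output||^2/dx, per example.
    Assumes net maps (n, d_in) -> (n, d_out)."""
    x = x.clone().requires_grad_(True)
    out = net(x)
    grad = torch.autograd.grad(out.pow(2).sum(), x)[0]
    return torch.cat([out.detach(), grad.detach()], dim=1)

def linear_cka(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two (n_examples, dim) representation matrices."""
    a = a - a.mean(dim=0)
    b = b - b.mean(dim=0)
    return (b.T @ a).norm() ** 2 / ((a.T @ a).norm() * (b.T @ b).norm())

# Usage with two hypothetical trained nets and a shared batch `x`:
# reps_a = example_representation(net_a, x)
# reps_b = example_representation(net_b, x)
# print(linear_cka(reps_a, reps_b))
```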
- Contrastive Similarity Matching for Supervised Learning [13.750624267664156]
We propose a biologically-plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks.
In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar.
We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections.
arXiv Detail & Related papers (2020-02-24T17:10:21Z)
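For flavor, here is a generic contrastive similarity objective in the spirit of the entry above: pull same-class pairs toward high cosine similarity and push different-class pairs below a margin. This is our own stand-in sketch; the paper derives a specific objective and a biologically-plausible circuit that this code does not reproduce.

```python
# Generic contrastive similarity loss: same-class representations are pulled
# toward cosine similarity 1, different-class pairs are pushed below a margin.
import torch
import torch.nn.functional as F

def contrastive_similarity_loss(z: torch.Tensor, labels: torch.Tensor,
                                margin: float = 0.5) -> torch.Tensor:
    """z: (batch, dim) representations; labels: (batch,) class indices.
    The batch should contain both same- and different-class pairs."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T                                   # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (1.0 - sim)[same & off_diag]              # same class: similarity -> 1
    neg = F.relu(sim - margin)[~same]               # different class: below margin
    return pos.mean() + neg.mean()

# Usage: loss = contrastive_similarity_loss(encoder(x), y); loss.backward()
```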
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.