Deconfounded Representation Similarity for Comparison of Neural Networks
- URL: http://arxiv.org/abs/2202.00095v1
- Date: Mon, 31 Jan 2022 21:25:02 GMT
- Title: Deconfounded Representation Similarity for Comparison of Neural Networks
- Authors: Tianyu Cui, Yogesh Kumar, Pekka Marttinen, Samuel Kaski
- Abstract summary: Similarity metrics are confounded by the population structure of data items in the input space.
We show that deconfounding the similarity metrics increases the resolution of detecting semantically similar neural networks.
- Score: 16.23053104309891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Similarity metrics such as representational similarity analysis (RSA) and
centered kernel alignment (CKA) have been used to compare layer-wise
representations between neural networks. However, these metrics are confounded
by the population structure of data items in the input space, leading to
spuriously high similarity for even completely random neural networks and
inconsistent domain relations in transfer learning. We introduce a simple and
generally applicable fix to adjust for the confounder with covariate adjustment
regression, which retains the intuitive invariance properties of the original
similarity measures. We show that deconfounding the similarity metrics
increases the resolution of detecting semantically similar neural networks.
Moreover, in real-world applications, deconfounding improves the consistency of
representation similarities with domain similarities in transfer learning, and
increases correlation with out-of-distribution accuracy.
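The covariate-adjustment idea can be sketched concretely. Below is a minimal, illustrative Python sketch of a deconfounded linear CKA, assuming the confounder is the input-space Gram matrix and the adjustment is an ordinary least-squares regression of each vectorized representation similarity matrix on it; the function names and the exact regression setup are assumptions for illustration rather than the paper's implementation (RSA could be adjusted analogously).

```python
import numpy as np

def _center(K):
    # Double-center a Gram (similarity) matrix, as in standard CKA.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(K, L):
    # Linear CKA between two n x n Gram matrices.
    Kc, Lc = _center(K), _center(L)
    return (Kc * Lc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

def _residualize(S, C):
    # OLS of vec(S) on [1, vec(C)]; keep only the residual, i.e. the part
    # of the similarity structure not explained by the confounder C.
    s, c = S.ravel(), C.ravel()
    A = np.column_stack([np.ones_like(c), c])
    beta, *_ = np.linalg.lstsq(A, s, rcond=None)
    return (s - A @ beta).reshape(S.shape)

def deconfounded_cka(Z1, Z2, X):
    # Z1, Z2: layer activations (n x d1, n x d2); X: flattened inputs (n x p).
    # Regress the input-space Gram matrix (the confounder) out of each
    # representation Gram matrix, then compute linear CKA on the residuals.
    K, L, C = Z1 @ Z1.T, Z2 @ Z2.T, X @ X.T
    return linear_cka(_residualize(K, C), _residualize(L, C))
```

In this sketch the adjusted Gram matrices are no longer guaranteed to be positive semi-definite, which is acceptable here because linear CKA only requires Frobenius inner products and norms.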
Related papers
- Decoupling Semantic Similarity from Spatial Alignment for Neural Networks [4.801683210246596]
We argue that the spatial location of semantic objects influences neither human perception nor deep learning classifiers.
This should be reflected in the definition of similarity between image responses for computer vision systems.
We measure semantic similarity between input responses by formulating it as a set-matching problem.
arXiv Detail & Related papers (2024-10-30T15:17:58Z) - Cluster-Aware Similarity Diffusion for Instance Retrieval [64.40171728912702]
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph.
We propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval.
arXiv Detail & Related papers (2024-06-04T14:19:50Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z) - Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently gains remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z) - Understanding Weight Similarity of Neural Networks via Chain Normalization Rule and Hypothesis-Training-Testing [58.401504709365284]
We present a weight similarity measure that can quantify the weight similarity of non-convolutional neural networks.
We first normalize the weights of neural networks by a chain normalization rule, which is used for weight representation learning.
We extend the traditional hypothesis-testing method to validate hypotheses on the weight similarity of neural networks.
arXiv Detail & Related papers (2022-08-08T19:11:03Z) - Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z) - MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity [0.0]
Similarity is a comparative-subjective measure that varies with the domain within which it is considered.
This paper presents a multi-layered semantic similarity network model built upon multiple similarity measures.
It is shown to achieve better performance in assessing sentence similarity.
arXiv Detail & Related papers (2021-11-09T20:43:18Z) - Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)