The Triangle of Similarity: A Multi-Faceted Framework for Comparing Neural Network Representations
- URL: http://arxiv.org/abs/2601.17093v1
- Date: Fri, 23 Jan 2026 12:15:43 GMT
- Title: The Triangle of Similarity: A Multi-Faceted Framework for Comparing Neural Network Representations
- Authors: Olha Sirikova, Alvin Chan
- Abstract summary: We propose the Triangle of Similarity, a framework that combines three complementary perspectives. Architectural family is a primary determinant of representational similarity, forming distinct clusters. For some model pairs, pruning appears to regularize representations, exposing a shared computational core.
- Score: 5.415604247164019
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Comparing neural network representations is essential for understanding and validating models in scientific applications. Existing methods, however, often provide a limited view. We propose the Triangle of Similarity, a framework that combines three complementary perspectives: static representational similarity (CKA/Procrustes), functional similarity (Linear Mode Connectivity or Predictive Similarity), and sparsity similarity (robustness under pruning). Analyzing a range of CNNs, Vision Transformers, and Vision-Language Models using both in-distribution (ImageNetV2) and out-of-distribution (CIFAR-10) testbeds, our initial findings suggest that: (1) architectural family is a primary determinant of representational similarity, forming distinct clusters; (2) CKA self-similarity and task accuracy are strongly correlated during pruning, though accuracy often degrades more sharply; and (3) for some model pairs, pruning appears to regularize representations, exposing a shared computational core. This framework offers a more holistic approach for assessing whether models have converged on similar internal mechanisms, providing a useful tool for model selection and analysis in scientific research.
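The static leg of the triangle measures representational similarity with CKA (Centered Kernel Alignment). As a rough illustration of the metric the abstract names, not the authors' own code, here is a minimal linear-CKA sketch in NumPy; the array shapes and the random test matrices are assumptions for the example:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_samples, d1), Y: (n_samples, d2) -- activations of two models
    on the same inputs. Returns a similarity score in [0, 1].
    """
    # Center each feature dimension across samples
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross-covariance term and self-similarity normalizers
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 64))
# CKA is invariant to orthogonal rotations of the feature space,
# so a rotated copy of A should score (numerically) 1.0
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
same = linear_cka(A, A @ Q)
# Independent random features should score well below a rotated copy
other = linear_cka(A, rng.normal(size=(100, 64)))
print(same, other)
```

The rotation invariance shown here is exactly why CKA is paired with Procrustes analysis in the framework: both deliberately ignore basis changes and compare the geometry of the representations instead.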
Related papers
- Bridging Functional and Representational Similarity via Usable Information [3.9189279162842854]
We present a unified framework for quantifying the similarity between representations through the lens of usable information. First, addressing functional similarity, we establish a formal link between stitching performance and conditional mutual information. Second, concerning representational similarity, we prove that reconstruction-based metrics and standard tools act as estimators of usable information under specific constraints.
arXiv Detail & Related papers (2026-01-29T11:30:55Z) - Objective drives the consistency of representational similarity across datasets [19.99817888941361]
We propose a systematic way to measure how representational similarity between models varies with the set of stimuli used to construct the representations. Self-supervised vision models learn representations whose relative pairwise similarities generalize better from one dataset to another. Our work provides a framework for analyzing similarities of model representations across datasets and linking those similarities to differences in task behavior.
arXiv Detail & Related papers (2024-11-08T13:35:45Z) - Tracing Representation Progression: Analyzing and Enhancing Layer-Wise Similarity [20.17288970927518]
We study the similarity of representations between the hidden layers of individual transformers. We show that representations across layers are positively correlated, with similarity increasing as layers get closer. We propose an aligned training method to improve the effectiveness of shallow layers.
arXiv Detail & Related papers (2024-06-20T16:41:09Z) - Explicit Correspondence Matching for Generalizable Neural Radiance Fields [66.99907718824782]
We present a new NeRF method that is able to generalize to new unseen scenarios and perform novel view synthesis with as few as two source views. The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views. Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density.
arXiv Detail & Related papers (2023-04-24T17:46:01Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z) - Similarity of Neural Architectures using Adversarial Attack Transferability [47.66096554602005]
We design a quantitative and scalable similarity measure between neural architectures.
We conduct a large-scale analysis on 69 state-of-the-art ImageNet classifiers.
Our results provide insights into why developing diverse neural architectures with distinct components is necessary.
arXiv Detail & Related papers (2022-10-20T16:56:47Z) - Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings [70.390286614242]
We propose a novel regularizer -- namely, DUality-induced RegulArizer (DURA) -- which effectively encourages the entities with similar semantics to have similar embeddings.
Experiments demonstrate that DURA consistently and significantly improves the performance of state-of-the-art semantic matching models.
arXiv Detail & Related papers (2022-03-24T09:24:39Z) - Represent, Compare, and Learn: A Similarity-Aware Framework for Class-Agnostic Counting [30.34585324943777]
Class-agnostic counting aims to count all instances in a query image given few exemplars.
Existing methods either adopt a pretrained network to represent features or learn a new one.
We propose a similarity-aware CAC framework that jointly learns representation and similarity metric.
arXiv Detail & Related papers (2022-03-16T02:24:25Z) - Colar: Effective and Efficient Online Action Detection by Consulting Exemplars [102.28515426925621]
We develop an effective exemplar-consultation mechanism that first measures the similarity between a frame and exemplary frames, and then aggregates exemplary features based on the similarity weights.
Due to the complementarity from the category-level modeling, our method employs a lightweight architecture but achieves new high performance on three benchmarks.
arXiv Detail & Related papers (2022-03-02T12:13:08Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z) - A Bootstrap-based Method for Testing Network Similarity [0.0]
This paper studies the matched network inference problem. The goal is to determine if two networks, defined on a common set of nodes, exhibit a specific form of similarity. Two notions of similarity are considered: (i) equality, i.e., testing whether the networks arise from the same random graph model, and (ii) scaling, i.e., testing whether their probabilities are proportional for some unknown scaling constant.
arXiv Detail & Related papers (2019-11-15T20:50:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.