Bridging Functional and Representational Similarity via Usable Information
- URL: http://arxiv.org/abs/2601.21568v1
- Date: Thu, 29 Jan 2026 11:30:55 GMT
- Title: Bridging Functional and Representational Similarity via Usable Information
- Authors: Antonio Almudévar, Alfonso Ortega,
- Abstract summary: We present a unified framework for quantifying the similarity between representations through the lens of \textit{usable information}. First, addressing functional similarity, we establish a formal link between stitching performance and conditional mutual information. Second, concerning representational similarity, we prove that reconstruction-based metrics and standard tools act as estimators of usable information under specific constraints.
- Score: 3.9189279162842854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a unified framework for quantifying the similarity between representations through the lens of \textit{usable information}, offering a rigorous theoretical and empirical synthesis across three key dimensions. First, addressing functional similarity, we establish a formal link between stitching performance and conditional mutual information. We further reveal that stitching is inherently asymmetric, demonstrating that robust functional comparison necessitates a bidirectional analysis rather than a unidirectional mapping. Second, concerning representational similarity, we prove that reconstruction-based metrics and standard tools (e.g., CKA, RSA) act as estimators of usable information under specific constraints. Crucially, we show that similarity is relative to the capacity of the predictive family: representations that appear distinct to a rigid observer may be identical to a more expressive one. Third, we demonstrate that representational similarity is sufficient but not necessary for functional similarity. We unify these concepts through a task-granularity hierarchy: similarity on a complex task guarantees similarity on any coarser derivative, establishing representational similarity as the limit of maximum granularity: input reconstruction.
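The abstract treats standard tools such as CKA as constrained estimators of representational similarity. As a point of reference, the sketch below implements linear CKA in NumPy; the function name `linear_cka` is illustrative and not taken from the paper, and this is the standard HSIC-based formulation rather than the paper's usable-information estimator.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representations of the same n stimuli.

    X has shape (n, d1) and Y has shape (n, d2); rows are paired
    examples, columns are features. Returns a scalar in [0, 1].
    """
    # Center each feature so Gram matrices correspond to centered kernels.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based form: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of the features, which is one concrete sense in which a metric can deem two differently parameterized representations "identical", in line with the abstract's point that similarity is relative to the observer's predictive family.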
Related papers
- The Triangle of Similarity: A Multi-Faceted Framework for Comparing Neural Network Representations [5.415604247164019]
We propose the Triangle of Similarity, a framework that combines three complementary perspectives. Architectural family is a primary determinant of representational similarity, forming distinct clusters. For some model pairs, pruning appears to regularize representations, exposing a shared computational core.
arXiv Detail & Related papers (2026-01-23T12:15:43Z) - Relational Visual Similarity [75.39827145344957]
Relational similarity is argued by cognitive scientists to be what distinguishes humans from other species. All widely used visual similarity metrics today focus solely on perceptual attribute similarity. Our study shows that while relational similarity has many real-world applications, existing image similarity models fail to capture it.
arXiv Detail & Related papers (2025-12-08T18:59:56Z) - Unifying Information-Theoretic and Pair-Counting Clustering Similarity [51.660331450043806]
Clustering similarity measures are typically organized into two principal families: pair-counting and information-theoretic. Here, we develop an analytical framework that unifies these families through two complementary perspectives.
arXiv Detail & Related papers (2025-11-04T21:13:32Z) - Objective drives the consistency of representational similarity across datasets [19.99817888941361]
We propose a systematic way to measure how representational similarity between models varies with the set of stimuli used to construct the representations. Self-supervised vision models learn representations whose relative pairwise similarities generalize better from one dataset to another. Our work provides a framework for analyzing similarities of model representations across datasets and linking those similarities to differences in task behavior.
arXiv Detail & Related papers (2024-11-08T13:35:45Z) - GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning [51.677086019209554]
We propose a Generalized Structural Sparse function to capture powerful relationships across modalities for pair-wise similarity learning.
The distance metric delicately encapsulates two formats of diagonal and block-diagonal terms.
Experiments on cross-modal and two extra uni-modal retrieval tasks have validated its superiority and flexibility.
arXiv Detail & Related papers (2024-10-20T03:45:50Z) - Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric [44.95433989446052]
We show the benefit of our proposed method through a new understanding of the contrastive loss of CLIP. We show that our proposed similarity based on weighted point sets consistently achieves the optimal similarity.
arXiv Detail & Related papers (2024-04-30T03:15:04Z) - Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding [112.0878081944858]
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning.
We seek to define and compute a notion of "conceptual similarity" among images that captures high-level relations.
Two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones need more detail to be distinguished.
arXiv Detail & Related papers (2024-02-14T03:31:17Z) - Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z) - Generalized quantum similarity learning [0.0]
We propose using quantum networks (GQSim) for learning task-dependent (a)symmetric similarity between data that need not have the same dimensionality.
We demonstrate that the similarity measure derived using this technique is $(\epsilon,\gamma,\tau)$-good, resulting in theoretically guaranteed performance.
arXiv Detail & Related papers (2022-01-07T03:28:19Z) - Instance Similarity Learning for Unsupervised Feature Representation [83.31011038813459]
We propose an instance similarity learning (ISL) method for unsupervised feature representation.
We employ the Generative Adversarial Networks (GAN) to mine the underlying feature manifold.
Experiments on image classification demonstrate the superiority of our method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-05T16:42:06Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms state-of-the-art methods, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.