Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families
- URL: http://arxiv.org/abs/2509.04622v4
- Date: Tue, 21 Oct 2025 04:31:57 GMT
- Title: Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families
- Authors: Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla
- Abstract summary: We introduce a framework to evaluate representational similarity measures based on their ability to separate model families. We use three complementary separability measures: d-prime from signal detection theory, silhouette coefficients, and ROC-AUC. We show that separability systematically increases as metrics impose more stringent alignment constraints.
- Score: 8.045700364123645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representational similarity metrics are fundamental tools in neuroscience and AI, yet we lack systematic comparisons of their discriminative power across model families. We introduce a quantitative framework to evaluate representational similarity measures based on their ability to separate model families across architectures (CNNs, Vision Transformers, Swin Transformers, ConvNeXt) and training regimes (supervised vs. self-supervised). Using three complementary separability measures (d-prime from signal detection theory, silhouette coefficients, and ROC-AUC), we systematically assess the discriminative capacity of commonly used metrics, including RSA, linear predictivity, Procrustes, and soft matching. We show that separability systematically increases as metrics impose more stringent alignment constraints. Among mapping-based approaches, soft matching achieves the highest separability, followed by Procrustes alignment and linear predictivity. Non-fitting methods such as RSA also yield strong separability across families. These results provide the first systematic comparison of similarity metrics through a separability lens, clarifying their relative sensitivity and guiding metric choice for large-scale model and brain comparisons.
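The three separability measures named in the abstract can be sketched on synthetic similarity scores. The within/between-family distributions below are illustrative assumptions, not the paper's data; the point is only how d-prime, ROC-AUC, and silhouette each quantify the same separation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, silhouette_score

rng = np.random.default_rng(0)

# Hypothetical similarity scores: a discriminative metric should score
# same-family model pairs higher than cross-family pairs.
within = rng.normal(loc=0.8, scale=0.1, size=200)   # same-family pairs
between = rng.normal(loc=0.5, scale=0.1, size=200)  # cross-family pairs

def dprime(a, b):
    """d-prime: mean separation in units of pooled standard deviation."""
    pooled_sd = np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))
    return (a.mean() - b.mean()) / pooled_sd

scores = np.concatenate([within, between])
labels = np.concatenate([np.ones_like(within), np.zeros_like(between)])

# ROC-AUC: probability a random within-family pair outscores a between-family one.
auc = roc_auc_score(labels, scores)

# Silhouette coefficient over the one-dimensional score distribution.
sil = silhouette_score(scores.reshape(-1, 1), labels)

print(f"d' = {dprime(within, between):.2f}, AUC = {auc:.2f}, silhouette = {sil:.2f}")
```

All three rise together as the two score distributions pull apart, which is why the paper can use them as complementary readouts of the same underlying separability.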
Related papers
- Unifying Information-Theoretic and Pair-Counting Clustering Similarity [51.660331450043806]
Clustering similarity measures are typically organized into two principal families: pair-counting and information-theoretic. Here, we develop an analytical framework that unifies these families through two complementary perspectives.
arXiv Detail & Related papers (2025-11-04T21:13:32Z) - Integrated representational signatures strengthen specificity in brains and models [8.045700364123645]
Similarity Network Fusion (SNF) is a framework originally developed for multi-omics data integration. SNF produces substantially sharper regional and model family-level separation than any single metric. Clustering cortical regions using SNF-derived similarity scores reveals a clearer hierarchical organization.
arXiv Detail & Related papers (2025-10-21T04:37:27Z) - Evaluating Representational Similarity Measures from the Lens of Functional Correspondence [1.7811840395202345]
Neuroscience and artificial intelligence (AI) both face the challenge of interpreting high-dimensional neural data.
Despite the widespread use of representational comparisons, a critical question remains: which metrics are most suitable for these comparisons?
arXiv Detail & Related papers (2024-11-21T23:53:58Z) - Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Duality of Bures and Shape Distances with Implications for Comparing Neural Representations [6.698235069945606]
A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape.
First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity.
Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity through summary statistics.
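The two families contrasted in that entry can be illustrated with a toy pair of representations. The construction below (one network as an orthogonal rotation of the other plus noise) is an assumed example for demonstration, not taken from the paper:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes, qr
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))                  # stimuli x units, network A
Q, _ = qr(rng.normal(size=(10, 10)))           # random orthogonal "unit rotation"
Y = X @ Q + 0.05 * rng.normal(size=(50, 10))   # network B: rotated A plus noise

# Summary-statistic route (RSA-style): correlate the two representational
# dissimilarity matrices; no mapping between units is ever fit.
rsa, _ = spearmanr(pdist(X), pdist(Y))

# Mapping-based route: orthogonal Procrustes fits an explicit rotation
# between the unit spaces, then the aligned residual is scored.
R, _ = orthogonal_procrustes(X, Y)
fit = 1 - np.linalg.norm(X @ R - Y) ** 2 / np.linalg.norm(Y) ** 2

print(f"RSA correlation = {rsa:.2f}, Procrustes fit = {fit:.2f}")
```

Because the second network is a rotation of the first, both routes report high similarity here; they diverge when the relation between representations is not rotation-like, which is exactly where the choice of family matters.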
arXiv Detail & Related papers (2023-11-19T22:17:09Z) - Enriching Disentanglement: From Logical Definitions to Quantitative Metrics [59.12308034729482]
Disentangling the explanatory factors in complex data is a promising approach for data-efficient representation learning.
We establish relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics.
We empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
arXiv Detail & Related papers (2023-05-19T08:22:23Z) - Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match that achieves these goals.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
arXiv Detail & Related papers (2023-02-23T00:43:03Z) - Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z) - Representational Multiplicity Should Be Exposed, Not Eliminated [27.495944788838457]
Two machine learning models with similar performance during training can have very different real-world performance characteristics.
This implies elusive differences in the internals of the models, manifesting as representational multiplicity (RM).
We introduce a conceptual and experimental setup for analyzing RM and show that certain training methods systematically result in greater RM than others.
arXiv Detail & Related papers (2022-06-17T16:53:12Z) - Never mind the metrics -- what about the uncertainty? Visualising confusion matrix metric distributions [6.566615606042994]
This paper strives for a more balanced perspective on classifier performance metrics by highlighting their distributions under different models of uncertainty.
We develop equations, animations and interactive visualisations of the contours of performance metrics within (and beyond) this ROC space.
Our hope is that these insights and visualisations will raise greater awareness of the substantial uncertainty in performance metric estimates.
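A minimal way to see the kind of metric uncertainty this entry visualises is to resample a metric from the observed confusion-matrix counts. The counts and the Beta-posterior uncertainty model below are illustrative assumptions, one of many possible "models of uncertainty":

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed confusion-matrix counts: TP, FN, FP, TN.
tp, fn, fp, tn = 45, 5, 10, 40

# Beta posteriors on the underlying TPR and FPR given the counts
# (uniform priors), sampled to propagate uncertainty.
tpr = rng.beta(tp + 1, fn + 1, size=10_000)
fpr = rng.beta(fp + 1, tn + 1, size=10_000)

# Any ROC-space metric inherits a distribution, not a point value;
# Youden's J = TPR - FPR is a simple example.
j = tpr - fpr
lo, hi = np.percentile(j, [2.5, 97.5])
print(f"J = {j.mean():.2f} (95% interval {lo:.2f}..{hi:.2f})")
```

Even with 100 samples the interval is wide, which is the entry's point: a single reported metric value hides substantial estimation uncertainty.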
arXiv Detail & Related papers (2022-06-05T11:54:59Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - A Novel Intrinsic Measure of Data Separability [0.0]
In machine learning, the performance of a classifier depends on the separability/complexity of datasets.
We create an intrinsic measure: the Distance-based Separability Index (DSI).
We show that the DSI can indicate whether the distributions of datasets are identical for any dimensionality.
arXiv Detail & Related papers (2021-09-11T04:20:08Z) - Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.