Integrated representational signatures strengthen specificity in brains and models
- URL: http://arxiv.org/abs/2510.20847v1
- Date: Tue, 21 Oct 2025 04:37:27 GMT
- Title: Integrated representational signatures strengthen specificity in brains and models
- Authors: Jialin Wu, Shreya Saha, Yiqing Bo, Meenakshi Khosla
- Abstract summary: Similarity Network Fusion (SNF) is a framework originally developed for multi-omics data integration. SNF produces substantially sharper regional and model-family-level separation than any single metric. Clustering cortical regions using SNF-derived similarity scores reveals a clearer hierarchical organization.
- Score: 8.045700364123645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The extent to which different neural or artificial neural networks (models) rely on equivalent representations to support similar tasks remains a central question in neuroscience and machine learning. Prior work has typically compared systems using a single representational similarity metric, yet each captures only one facet of representational structure. To address this, we leverage a suite of representational similarity metrics, each capturing a distinct facet of representational correspondence such as geometry, unit-level tuning, or linear decodability, and assess brain region or model separability using multiple complementary measures. Metrics that preserve geometric or tuning structure (e.g., RSA, Soft Matching) yield stronger region-based discrimination, whereas more flexible mappings such as Linear Predictivity show weaker separation. These findings suggest that geometry and tuning encode brain-region- or model-family-specific signatures, while linearly decodable information tends to be more globally shared across regions or models. To integrate these complementary representational facets, we adapt Similarity Network Fusion (SNF), a framework originally developed for multi-omics data integration. SNF produces substantially sharper regional and model-family-level separation than any single metric and yields robust composite similarity profiles. Moreover, clustering cortical regions using SNF-derived similarity scores reveals a clearer hierarchical organization that aligns closely with established anatomical and functional hierarchies of the visual cortex, surpassing the correspondence achieved by individual metrics.
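The fusion step described above can be sketched, in simplified form, as cross-diffusion over per-metric affinity matrices. The code below is a minimal illustration under assumed inputs (a list of symmetric, nonnegative similarity matrices, one per metric); it is not the authors' implementation, and the function name and parameters are hypothetical.

```python
import numpy as np

def snf(affinities, k=3, iterations=10):
    """Simplified Similarity Network Fusion over a list of (n x n)
    affinity matrices, one per representational similarity metric."""
    # Full transition matrices: each row normalized to sum to 1
    P = [a / a.sum(axis=1, keepdims=True) for a in affinities]
    # Sparse kernels: keep only each row's k largest affinities
    S = []
    for a in affinities:
        s = np.zeros_like(a)
        for i in range(a.shape[0]):
            nn = np.argsort(a[i])[-k:]      # indices of k largest entries
            s[i, nn] = a[i, nn]
        S.append(s / s.sum(axis=1, keepdims=True))
    # Cross-diffusion: each network is updated toward the average of the others
    for _ in range(iterations):
        P_new = []
        for v in range(len(P)):
            others = [P[u] for u in range(len(P)) if u != v]
            avg = sum(others) / len(others)
            P_new.append(S[v] @ avg @ S[v].T)
        P = [(p + p.T) / 2 for p in P_new]  # keep each network symmetric
    return sum(P) / len(P)                  # fused composite similarity network
```

After fusion, the returned matrix can be fed to any standard clustering routine (e.g., spectral or hierarchical clustering) to group regions or models by their composite similarity profiles.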
Related papers
- Barycentric alignment for instance-level comparison of neural representations [2.1920579994942164]
We introduce a barycentric alignment framework that quotients out nuisance symmetries to construct a universal embedding space across many models. We identify systematic input properties that predict representational convergence versus divergence across vision and language model families. We also apply the same barycentric alignment framework to purely unimodal vision and language models and find that post-hoc alignment into a shared space yields image-text similarity scores.
arXiv Detail & Related papers (2026-02-09T21:49:44Z) - Local Intrinsic Dimension of Representations Predicts Alignment and Generalization in AI Models and Human Brain [14.072972213206524]
Recent work has found that neural networks with stronger generalization tend to exhibit higher representational alignment with one another. We show that models with stronger generalization also align more strongly with human neural activity. These relationships can be explained by a single geometric property of learned representations: the local intrinsic dimension of embeddings.
arXiv Detail & Related papers (2026-01-30T08:54:59Z) - A Data-driven Typology of Vision Models from Integrated Representational Metrics [8.045700364123645]
Large vision models differ widely in architecture and training paradigm, yet we lack principled methods to determine which aspects of their representations are shared across families. We leverage a suite of representational similarity metrics, each capturing a different facet (geometry, unit tuning, or linear decodability), and assess family separability. We adapt Similarity Network Fusion (SNF), a method inspired by multi-omics integration, to integrate these complementary facets.
arXiv Detail & Related papers (2025-09-25T21:46:09Z) - Scale-Invariance Drives Convergence in AI and Brain Representations [7.318297580732467]
Recent studies indicate that large-scale AI models often converge toward similar internal representations that also align with neural activity. We quantify two core aspects of scale-invariance in AI representations: dimensional stability and structural similarity across scales. Our analysis reveals that embeddings with more consistent dimension and higher structural similarity across scales align better with fMRI data.
arXiv Detail & Related papers (2025-06-13T15:36:04Z) - Connecting Neural Models Latent Geometries with Relative Geodesic Representations [21.71782603770616]
We show that when a latent structure is shared between distinct latent spaces, relative distances between representations can be preserved, up to distortions. We assume that distinct neural models parametrize approximately the same underlying manifold, and introduce a representation based on the pullback metric. We validate our method on model stitching and retrieval tasks, covering autoencoders and discriminative vision foundation models.
arXiv Detail & Related papers (2025-06-02T12:34:55Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with attention mechanism, we can effectively boost performance without huge computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning [78.49090351193269]
We propose a novel graph-based framework to leverage the inter-relationships among different types of nuclei for WSI analysis.
Specifically, we formulate the WSI as a heterogeneous graph with "nucleus-type" attribute to each node and a semantic attribute similarity to each edge.
Our framework outperforms the state-of-the-art methods with considerable margins on various tasks.
arXiv Detail & Related papers (2023-07-09T14:43:40Z) - Deconfounded Representation Similarity for Comparison of Neural Networks [16.23053104309891]
Similarity metrics are confounded by the population structure of data items in the input space.
We show that deconfounding the similarity metrics increases the resolution of detecting semantically similar neural networks.
arXiv Detail & Related papers (2022-01-31T21:25:02Z) - Multi-Scale Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition [140.18376685167857]
A simple yet effective multi-scale semantics-guided neural network is proposed for skeleton-based action recognition.
MS-SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets.
arXiv Detail & Related papers (2021-11-07T03:50:50Z) - Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z) - Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.