Duality of Bures and Shape Distances with Implications for Comparing Neural Representations
- URL: http://arxiv.org/abs/2311.11436v1
- Date: Sun, 19 Nov 2023 22:17:09 GMT
- Title: Duality of Bures and Shape Distances with Implications for Comparing Neural Representations
- Authors: Sarah E. Harvey, Brett W. Larsen, Alex H. Williams
- Abstract summary: A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape.
First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances learn explicit mappings between neural units to quantify similarity.
Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics.
- Score: 6.698235069945606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multitude of (dis)similarity measures between neural network
representations have been proposed, resulting in a fragmented research
landscape. Most of these measures fall into one of two categories.
First, measures such as linear regression, canonical correlation analysis
(CCA), and shape distances learn explicit mappings between neural units to
quantify similarity while accounting for expected invariances. Second, measures
such as representational similarity analysis (RSA), centered kernel alignment
(CKA), and normalized Bures similarity (NBS) all quantify similarity in summary
statistics, such as stimulus-by-stimulus kernel matrices, which are already
invariant to expected symmetries. Here, we take steps towards unifying these
two broad categories of methods by observing that the cosine of the Riemannian
shape distance (from category 1) is equal to NBS (from category 2). We explore
how this connection leads to new interpretations of shape distances and NBS,
and contrast these measures with CKA, a popular similarity measure in the
deep learning literature.
Related papers
- Evaluating Representational Similarity Measures from the Lens of Functional Correspondence [1.7811840395202345]
Neuroscience and artificial intelligence (AI) both face the challenge of interpreting high-dimensional neural data.
Despite the widespread use of representational comparisons, a critical question remains: which metrics are most suitable for these comparisons?
arXiv Detail & Related papers (2024-11-21T23:53:58Z)
- What Representational Similarity Measures Imply about Decodable Information [6.5879381737929945]
We show that some neural network similarity measures can be equivalently motivated from a decoding perspective.
Measures like CKA and CCA quantify the average alignment between optimal linear readouts across a distribution of decoding tasks (a minimal CKA sketch follows this list).
Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information.
arXiv Detail & Related papers (2024-11-12T21:37:10Z)
- Differentiable Optimization of Similarity Scores Between Models and Brains [1.5391321019692434]
Similarity measures such as linear regression, Centered Kernel Alignment (CKA), Normalized Bures Similarity (NBS), and angular Procrustes distance are often used to quantify this similarity.
Here, we introduce a novel tool to investigate what drives high similarity scores and what constitutes a "good" score.
Surprisingly, we find that high similarity scores do not guarantee encoding task-relevant information in a manner consistent with neural data.
arXiv Detail & Related papers (2024-07-09T17:31:47Z)
- Cluster-Aware Similarity Diffusion for Instance Retrieval [64.40171728912702]
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph.
We propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval.
arXiv Detail & Related papers (2024-06-04T14:19:50Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We re-examine when and where double descent occurs, and show that its location is not inherently tied to the interpolation threshold p=n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by how humans count by modeling the similarity of objects, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z)
- Partial Shape Similarity via Alignment of Multi-Metric Hamiltonian Spectra [10.74981839055037]
We propose a novel axiomatic method to match similar regions across shapes.
Matching similar regions is formulated as the alignment of the spectra of operators closely related to the Laplace-Beltrami operator (LBO).
We show that matching these dual spectra outperforms competing axiomatic frameworks when tested on standard benchmarks.
arXiv Detail & Related papers (2022-07-07T00:03:50Z)
- Deconfounded Representation Similarity for Comparison of Neural Networks [16.23053104309891]
Similarity metrics are confounded by the population structure of data items in the input space.
We show that deconfounding the similarity metrics increases the resolution of detecting semantically similar neural networks.
arXiv Detail & Related papers (2022-01-31T21:25:02Z)
- Kernel distance measures for time series, random fields and other structured data [71.61147615789537]
kdiff is a novel kernel-based measure for estimating distances between instances of structured data.
It accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution.
Some theoretical results are provided for separability conditions using kdiff as a distance measure for clustering and classification problems.
arXiv Detail & Related papers (2021-09-29T22:54:17Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)