Differentiable Optimization of Similarity Scores Between Models and Brains
- URL: http://arxiv.org/abs/2407.07059v1
- Date: Tue, 9 Jul 2024 17:31:47 GMT
- Title: Differentiable Optimization of Similarity Scores Between Models and Brains
- Authors: Nathan Cloos, Moufan Li, Markus Siegel, Scott L. Brincat, Earl K. Miller, Guangyu Robert Yang, Christopher J. Cueva
- Abstract summary: We analyze neural activity recorded in five experiments on nonhuman primates.
We find that some measures, such as linear regression and CKA, differ from angular Procrustes.
We show in both theory and simulations how these scores change when different principal components are perturbed.
- Score: 1.5391321019692434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What metrics should guide the development of more realistic models of the brain? One proposal is to quantify the similarity between models and brains using methods such as linear regression, Centered Kernel Alignment (CKA), and angular Procrustes distance. To better understand the limitations of these similarity measures we analyze neural activity recorded in five experiments on nonhuman primates, and optimize synthetic datasets to become more similar to these neural recordings. How similar can these synthetic datasets be to neural activity while failing to encode task relevant variables? We find that some measures, such as linear regression and CKA, differ from angular Procrustes and yield high similarity scores even when task relevant variables cannot be linearly decoded from the synthetic datasets. Synthetic datasets optimized to maximize similarity scores initially learn the first principal component of the target dataset, but angular Procrustes captures higher variance dimensions much earlier than methods like linear regression and CKA. We show in both theory and simulations how these scores change when different principal components are perturbed. Finally, we jointly optimize multiple similarity scores to find their allowed ranges, and show that a high angular Procrustes similarity, for example, implies a high CKA score, but not the converse.
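The two measures contrasted in the abstract can be sketched in a few lines of NumPy. The functions below implement the standard linear-CKA and angular-Procrustes definitions, not the authors' exact code; the variable names and toy data are illustrative.

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F),
    # computed on column-centered data matrices (samples x features).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def angular_procrustes(X, Y):
    # Center and scale each matrix to unit Frobenius norm, then take the
    # arccosine of the nuclear norm of X^T Y (the optimal-rotation overlap).
    X = X - X.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y - Y.mean(axis=0)
    Y = Y / np.linalg.norm(Y)
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.arccos(np.clip(s.sum(), -1.0, 1.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
cka_self = linear_cka(X, X)            # identical data gives CKA of 1
dist_self = angular_procrustes(X, X)   # identical data gives distance 0
```

A high CKA score alone does not certify a match: the paper's point is that angular Procrustes is the stricter of the two, so checking both (and a decoding analysis) is safer than relying on either in isolation.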
Related papers
- Context-Aware Palmprint Recognition via a Relative Similarity Metric [0.0]
We propose a new approach to the matching mechanism for palmprint recognition by introducing a Relative Similarity Metric (RSM).
RSM captures how a pairwise similarity compares within the context of the entire dataset.
Our method achieves a new state-of-the-art 0.000036% Equal Error Rate (EER) on the Tongji dataset, outperforming previous methods.
arXiv Detail & Related papers (2025-04-15T15:46:17Z) - Evaluating Representational Similarity Measures from the Lens of Functional Correspondence [1.7811840395202345]
Neuroscience and artificial intelligence (AI) both face the challenge of interpreting high-dimensional neural data.
Despite the widespread use of representational comparisons, a critical question remains: which metrics are most suitable for these comparisons?
arXiv Detail & Related papers (2024-11-21T23:53:58Z) - Measuring similarity between embedding spaces using induced neighborhood graphs [10.056989400384772]
We propose a metric to evaluate the similarity between paired item representations.
Our results show that accuracy in both analogy and zero-shot classification tasks correlates with the embedding similarity.
arXiv Detail & Related papers (2024-11-13T15:22:33Z) - What Representational Similarity Measures Imply about Decodable Information [6.5879381737929945]
We show that some neural network similarity measures can be equivalently motivated from a decoding perspective.
Measures like CKA and CCA quantify the average alignment between optimal linear readouts across a distribution of decoding tasks.
Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information.
arXiv Detail & Related papers (2024-11-12T21:37:10Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct distance matrix between data points by Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z) - Reliability of CKA as a Similarity Measure in Deep Learning [17.555458413538233]
We present analysis that characterizes CKA sensitivity to a large class of simple transformations.
We investigate several weaknesses of the CKA similarity metric, demonstrating situations in which it gives unexpected or counter-intuitive results.
Our results illustrate that, in many cases, the CKA value can be easily manipulated without substantial changes to the functional behaviour of the models.
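One such manipulation is easy to reproduce: appending a single shared high-norm point to two otherwise unrelated representations drives linear CKA toward 1, even though the rest of the data is independent. This is a toy sketch with random data, not the paper's experiments; `linear_cka` is the standard linear-CKA formula.

```python
import numpy as np

def linear_cka(X, Y):
    # Standard linear CKA on column-centered data matrices.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
Y = rng.standard_normal((200, 30))   # unrelated representation
base = linear_cka(X, Y)              # low: the data are independent

# Append one shared high-norm "outlier" point to both representations.
# It dominates both Gram matrices, so CKA climbs toward 1 even though
# the other 200 points are still completely unrelated.
o = 100.0 * rng.standard_normal(30)
X2 = np.vstack([X, o])
Y2 = np.vstack([Y, o])
inflated = linear_cka(X2, Y2)
```

This outlier sensitivity is one reason the listed paper cautions against treating a single CKA value as evidence of functional similarity.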
arXiv Detail & Related papers (2022-10-28T14:32:52Z) - Efficient Approximate Kernel Based Spike Sequence Classification [56.2938724367661]
Machine learning models, such as SVM, require a definition of distance/similarity between pairs of sequences.
Exact methods yield better classification performance, but they pose high computational costs.
We propose a series of ways to improve the performance of the approximate kernel in order to enhance its predictive performance.
arXiv Detail & Related papers (2022-09-11T22:44:19Z) - Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z) - Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserve relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z) - Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z) - Learning similarity measures from data [1.4766350834632755]
Defining similarity measures is a requirement for some machine learning methods.
Data sets are typically gathered as part of constructing a case-based reasoning (CBR) or machine learning system.
Our objective is to investigate how to apply machine learning to effectively learn a similarity measure.
arXiv Detail & Related papers (2020-01-15T13:29:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.