Correcting Biased Centered Kernel Alignment Measures in Biological and Artificial Neural Networks
- URL: http://arxiv.org/abs/2405.01012v1
- Date: Thu, 2 May 2024 05:27:12 GMT
- Title: Correcting Biased Centered Kernel Alignment Measures in Biological and Artificial Neural Networks
- Authors: Alex Murphy, Joel Zylberberg, Alona Fyshe
- Abstract summary: Centred Kernel Alignment (CKA) has recently emerged as a popular metric to compare activations from biological and artificial neural networks (ANNs).
In this paper we highlight issues that the community should take into account if using CKA as an alignment metric with neural data.
- Score: 4.437949196235149
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Centred Kernel Alignment (CKA) has recently emerged as a popular metric to compare activations from biological and artificial neural networks (ANNs) in order to quantify the alignment between internal representations derived from stimuli sets (e.g. images, text, video) that are presented to both systems. In this paper we highlight issues that the community should take into account if using CKA as an alignment metric with neural data. Neural data are in the low-data, high-dimensionality domain, which is one of the cases where (biased) CKA results in high similarity scores even for pairs of random matrices. Using fMRI and MEG data from the THINGS project, we show that if biased CKA is applied to representations of different sizes in the low-data, high-dimensionality domain, they are not directly comparable due to biased CKA's sensitivity to differing feature-sample ratios rather than to stimuli-driven responses. This situation can arise both when comparing a pre-selected area of interest (e.g. an ROI) to multiple ANN layers and when determining to which ANN layer multiple regions of interest (ROIs) / sensor groups of different dimensionality are most similar. We show that biased CKA can be artificially driven to its maximum value when using independent random data of different sample-feature ratios. We further show that shuffling sample-feature pairs of real neural data does not drastically alter biased CKA similarity in comparison to unshuffled data, indicating an undesirable lack of sensitivity to stimuli-driven neural responses. Positive alignment of true stimuli-driven responses is only achieved by using debiased CKA. Lastly, we report findings that suggest biased CKA is sensitive to the inherent structure of neural data, only differing from shuffled data when debiased CKA detects stimuli-driven alignment.
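To make the failure mode concrete, below is a minimal NumPy sketch (not the authors' code) contrasting biased linear CKA with its debiased counterpart, which swaps the biased HSIC estimator for the unbiased estimator of Song et al. (2012). With a handful of samples and many features, even independent random matrices score high under biased CKA:

```python
import numpy as np

def hsic_biased(k, l):
    """Biased HSIC estimator: tr(K H L H) with centering matrix H."""
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return np.trace(k @ h @ l @ h)

def hsic_unbiased(k, l):
    """Unbiased HSIC estimator (Song et al., 2012); requires n >= 4."""
    n = k.shape[0]
    k, l = k.copy(), l.copy()
    np.fill_diagonal(k, 0.0)   # the estimator uses zero-diagonal Gram matrices
    np.fill_diagonal(l, 0.0)
    term1 = np.trace(k @ l)
    term2 = k.sum() * l.sum() / ((n - 1) * (n - 2))
    term3 = 2.0 * k.sum(axis=1) @ l.sum(axis=1) / (n - 2)
    return (term1 + term2 - term3) / (n * (n - 3))

def linear_cka(x, y, debiased=False):
    """Linear CKA between (n_samples, n_features) activation matrices."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    k, l = x @ x.T, y @ y.T
    hsic = hsic_unbiased if debiased else hsic_biased
    return hsic(k, l) / np.sqrt(hsic(k, k) * hsic(l, l))

# Low-data, high-dimensionality regime: independent random "responses".
rng = np.random.default_rng(0)
x = rng.standard_normal((20, 5000))  # e.g. 20 stimuli x 5000 voxels
y = rng.standard_normal((20, 100))   # e.g. 20 stimuli x 100 ANN units
print(f"biased CKA:   {linear_cka(x, y):.3f}")                 # spuriously high
print(f"debiased CKA: {linear_cka(x, y, debiased=True):.3f}")  # near zero
```

The biased score here tracks the feature-sample ratios rather than any shared signal, which is exactly the artefact the abstract describes; the debiased score stays near zero (it can dip slightly negative) for independent data.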
Related papers
- Differentiable Optimization of Similarity Scores Between Models and Brains [1.5391321019692434]
Similarity measures such as linear regression, Centered Kernel Alignment (CKA), Normalized Bures Similarity (NBS), and angular Procrustes distance are often used to quantify the similarity between models and brains.
Here, we introduce a novel tool to investigate what drives high similarity scores and what constitutes a "good" score.
Surprisingly, we find that high similarity scores do not guarantee encoding task-relevant information in a manner consistent with neural data.
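As a concrete reference for one of the metrics named above, here is a minimal sketch of the angular Procrustes distance under its common definition (the arccosine of the normalized nuclear norm); the paper's exact implementation may differ:

```python
import numpy as np

def angular_procrustes(x, y):
    """Angular Procrustes distance between two (n_samples, n_features)
    representations: arccos(||X^T Y||_* / (||X||_F ||Y||_F))."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    nuclear = np.linalg.svd(x.T @ y, compute_uv=False).sum()
    sim = nuclear / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(sim, 0.0, 1.0))
```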
arXiv Detail & Related papers (2024-07-09T17:31:47Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
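For context, a generic continuous-time RNN update (a sketch of the model family, not this paper's specific architecture); the variable step size dt is what lets such models absorb irregular observation gaps:

```python
import numpy as np

def ctrnn_step(h, x, dt, w, u, tau=1.0):
    """One Euler step of a generic CTRNN: dh/dt = (-h + tanh(W h + U x)) / tau.
    Irregularly spaced measurements enter through a per-step dt."""
    return h + dt * (-h + np.tanh(w @ h + u @ x)) / tau
```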
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
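One distance such evaluations repeatedly revisit is the Fréchet distance underlying FID; a minimal sketch of its computation between two feature sets (illustrative, not the paper's code):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fit to two (n_samples, dim)
    feature sets, the quantity at the core of FID."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)
```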
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - Linking convolutional kernel size to generalization bias in face analysis CNNs [9.030335233143603]
We present a causal framework for linking an architectural hyperparameter to out-of-distribution algorithmic bias.
In our experiments, we focused on measuring the causal relationship between convolutional kernel size and face analysis classification bias.
We show that modifying kernel size, even in one layer of a CNN, changes the frequency content of learned features significantly across data subgroups.
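As an illustration of the kind of analysis described, a rough sketch of comparing the frequency content of kernels of different sizes (the padding size and radial averaging scheme are illustrative choices, not the paper's):

```python
import numpy as np

def kernel_frequency_profile(kernel, pad=64):
    """Radially averaged magnitude spectrum of a 2-D conv kernel,
    zero-padded so kernels of different sizes are comparable."""
    padded = np.zeros((pad, pad))
    k_h, k_w = kernel.shape
    padded[:k_h, :k_w] = kernel
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(padded)))
    yy, xx = np.indices(spectrum.shape)
    radius = np.hypot(yy - pad // 2, xx - pad // 2).astype(int)
    counts = np.bincount(radius.ravel())
    counts[counts == 0] = 1  # guard against empty radial bins
    return np.bincount(radius.ravel(), weights=spectrum.ravel()) / counts
```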
arXiv Detail & Related papers (2023-02-07T20:55:09Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to $17.0\%$ AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
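GNNSafe builds on the energy score of Liu et al. (2020); a minimal sketch of that score over classification logits (GNNSafe's graph-specific propagation step is omitted):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy score E(x) = -T * logsumexp(f(x) / T); higher energy
    indicates a more likely out-of-distribution input."""
    z = logits / temperature
    m = z.max(axis=-1)  # subtract the max for numerical stability
    return -temperature * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))
```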
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Interpreting Bias in the Neural Networks: A Peek Into Representational Similarity [0.0]
We investigate the performance and internal representational structure of convolution-based neural networks trained on biased data.
We specifically study similarities in representations, using Centered Kernel Alignment (CKA) for different objective functions.
We note that without progressive representational similarities among the layers of a neural network, the performance is less likely to be robust.
arXiv Detail & Related papers (2022-11-14T22:17:14Z) - Reliability of CKA as a Similarity Measure in Deep Learning [17.555458413538233]
We present analysis that characterizes CKA sensitivity to a large class of simple transformations.
We investigate several weaknesses of the CKA similarity metric, demonstrating situations in which it gives unexpected or counter-intuitive results.
Our results illustrate that, in many cases, the CKA value can be easily manipulated without substantial changes to the functional behaviour of the models.
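One simple self-contained illustration (assuming the biased linear CKA of Kornblith et al., 2019, not the paper's specific transformations): an invertible rescaling of features preserves everything a linear readout can extract, yet moves the CKA score far from 1.

```python
import numpy as np

def linear_cka(x, y):
    """Biased linear CKA (Kornblith et al., 2019)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    return (np.linalg.norm(y.T @ x) ** 2
            / (np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y)))

rng = np.random.default_rng(1)
x = rng.standard_normal((500, 64))            # 500 samples, 64 units
m = np.diag(10.0 ** rng.uniform(-2, 2, 64))   # invertible anisotropic scaling
y = x @ m  # same information: any readout of x is recoverable from y
print(f"CKA(x, x)     = {linear_cka(x, x):.3f}")  # 1.000
print(f"CKA(x, x @ M) = {linear_cka(x, y):.3f}")  # well below 1
```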
arXiv Detail & Related papers (2022-10-28T14:32:52Z) - Shift Invariance Can Reduce Adversarial Robustness [20.199887291186364]
Shift invariance is a critical property of CNNs that improves performance on classification.
We show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks.
arXiv Detail & Related papers (2021-03-03T21:27:56Z) - Decorrelated Clustering with Data Selection Bias [55.91842043124102]
We propose a novel Decorrelation regularized K-Means algorithm (DCKM) for clustering with data selection bias.
Our DCKM algorithm achieves significant performance gains, indicating the necessity of removing unexpected feature correlations induced by selection bias.
arXiv Detail & Related papers (2020-06-29T08:55:50Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
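A minimal simulation of the generative model the summary describes (dimensions and noise level are illustrative):

```python
import numpy as np

# MultiView ICA model: each subject i observes x_i = A_i @ s + n_i,
# with sources s shared across subjects and mixing A_i subject-specific.
rng = np.random.default_rng(2)
n_sources, n_sensors, n_times, n_subjects = 5, 30, 1000, 4
s = rng.laplace(size=(n_sources, n_times))  # shared non-Gaussian sources
views = []
for _ in range(n_subjects):
    a = rng.standard_normal((n_sensors, n_sources))  # subject-specific mixing
    noise = 0.1 * rng.standard_normal((n_sensors, n_times))
    views.append(a @ s + noise)
```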
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.