Generalized Shape Metrics on Neural Representations
- URL: http://arxiv.org/abs/2110.14739v1
- Date: Wed, 27 Oct 2021 19:48:55 GMT
- Title: Generalized Shape Metrics on Neural Representations
- Authors: Alex H. Williams and Erin Kunz and Simon Kornblith and Scott W. Linderman
- Abstract summary: We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
- Score: 26.78835065137714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding the operation of biological and artificial networks remains a
difficult and important challenge. To identify general principles, researchers
are increasingly interested in surveying large collections of networks that are
trained on, or biologically adapted to, similar tasks. A standardized set of
analysis tools is now needed to identify how network-level covariates -- such
as architecture, anatomical brain region, and model organism -- impact neural
representations (hidden layer activations). Here, we provide a rigorous
foundation for these analyses by defining a broad family of metric spaces that
quantify representational dissimilarity. Using this framework we modify
existing representational similarity measures based on canonical correlation
analysis to satisfy the triangle inequality, formulate a novel metric that
respects the inductive biases in convolutional layers, and identify approximate
Euclidean embeddings that enable network representations to be incorporated
into essentially any off-the-shelf machine learning method. We demonstrate
these methods on large-scale datasets from biology (Allen Institute Brain
Observatory) and deep learning (NAS-Bench-101). In doing so, we identify
relationships between neural representations that are interpretable in terms of
anatomical features and model performance.
Related papers
- Topological Representational Similarity Analysis in Brains and Beyond [15.417809900388262]
This thesis introduces Topological RSA (tRSA), a novel framework combining geometric and topological properties of neural representations.
tRSA applies nonlinear monotonic transforms to representational dissimilarities, emphasizing local topology while retaining intermediate-scale geometry.
The resulting geo-topological matrices enable model comparisons robust to noise and individual idiosyncrasies.
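The transform step can be sketched as follows. This is only an illustrative monotonic squashing of a representational dissimilarity matrix (RDM); the quantile thresholds and clipping rule are assumptions for this sketch, not the thesis's actual geo-topological transform family:

```python
import numpy as np

def geo_topological_transform(rdm, lower_q=0.1, upper_q=0.9):
    """Apply a nonlinear monotonic transform to an RDM: floor the
    smallest dissimilarities and saturate the largest, so local
    topology is emphasized while intermediate-scale geometry is kept.

    rdm: symmetric (n x n) dissimilarity matrix with a zero diagonal.
    """
    lo = np.quantile(rdm[rdm > 0], lower_q)
    hi = np.quantile(rdm[rdm > 0], upper_q)
    out = np.clip(rdm, lo, hi)        # saturate both extremes
    out = (out - lo) / (hi - lo)      # rescale to [0, 1]
    np.fill_diagonal(out, 0.0)        # keep self-dissimilarity at zero
    return out
```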
arXiv Detail & Related papers (2024-08-21T19:02:00Z)
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
- Pruning neural network models for gene regulatory dynamics using data and domain knowledge [24.670514977455202]
We propose DASH, a framework that guides network pruning by using domain-specific structural information in model fitting.
We show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin.
arXiv Detail & Related papers (2024-03-05T23:02:55Z)
- Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds [12.037840490243603]
We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
arXiv Detail & Related papers (2023-12-21T20:40:51Z)
- Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of its performance.
Our findings suggest important relationships between a network's final performance and the properties of its randomly initialised feature space.
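Both quantities have simple standard estimators. The sketch below uses the participation ratio of covariance eigenvalues for intrinsic dimension and the mean absolute pairwise cosine for quasi-orthogonality; these are common choices, not necessarily the exact definitions used by the authors:

```python
import numpy as np

def feature_space_stats(F):
    """Two scalar descriptors of a feature space.

    F: (num_features, dim) matrix of feature vectors.
    Returns (intrinsic_dim, quasi_orthogonality) where
    intrinsic_dim is the participation ratio of the covariance
    spectrum and quasi_orthogonality is the mean |cosine| between
    distinct feature vectors (near 0 means nearly orthogonal).
    """
    C = np.cov(F, rowvar=False)
    eig = np.linalg.eigvalsh(C)
    intrinsic_dim = eig.sum() ** 2 / (eig ** 2).sum()
    # Cosine similarities between row-normalized feature vectors.
    G = F / np.linalg.norm(F, axis=1, keepdims=True)
    cos = G @ G.T
    off_diag = cos[~np.eye(len(F), dtype=bool)]
    return intrinsic_dim, np.abs(off_diag).mean()
```

Random high-dimensional Gaussian vectors are nearly orthogonal on average, which makes these statistics useful probes of randomly initialised feature spaces.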
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapt a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
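The edge-weighted complete graph can be sketched as a forward pass in which each node aggregates the sigmoid-gated outputs of all earlier nodes. The names `node_ops` and `edge_logits` are hypothetical, and the gating scheme is an illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def complete_graph_forward(node_ops, edge_logits, x):
    """Forward pass over a complete DAG of n computation nodes.

    node_ops:    list of n callables (the per-node operations).
    edge_logits: (n x n) array; entry [i, j] with i < j is the
                 learnable logit for the directed edge i -> j.
    """
    n = len(node_ops)
    outs = [node_ops[0](x)]
    for j in range(1, n):
        # Sigmoid gates on all incoming edges from earlier nodes.
        gate = 1.0 / (1.0 + np.exp(-edge_logits[:j, j]))
        agg = sum(g * o for g, o in zip(gate, outs))
        outs.append(node_ops[j](agg))
    return outs[-1]
```

Because the gates are smooth functions of the logits, the connectivity itself receives gradients and can be trained jointly with the node operations.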
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Complexity for deep neural networks and other characteristics of deep feature representations [0.0]
We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network.
We investigate these observables both for trained networks as well as explore their dynamics during training.
arXiv Detail & Related papers (2020-06-08T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.