Exploring Geometric Representational Alignment through Ollivier-Ricci Curvature and Ricci Flow
- URL: http://arxiv.org/abs/2501.00919v1
- Date: Wed, 01 Jan 2025 18:33:48 GMT
- Title: Exploring Geometric Representational Alignment through Ollivier-Ricci Curvature and Ricci Flow
- Authors: Nahid Torbati, Michael Gaebler, Simon M. Hofmann, Nico Scherf
- Abstract summary: We use Ollivier-Ricci curvature and Ricci flow as tools to study the alignment of representations between humans and artificial neural systems.
As a proof-of-principle study, we compared the representations of face stimuli between VGG-Face, a human-aligned version of VGG-Face, and corresponding human similarity judgments from a large online study.
- Abstract: Representational analysis explores how input data of a neural system are encoded in high dimensional spaces of its distributed neural activations, and how we can compare different systems, for instance, artificial neural networks and brains, on those grounds. While existing methods offer important insights, they typically do not account for local intrinsic geometrical properties within the high-dimensional representation spaces. To go beyond these limitations, we explore Ollivier-Ricci curvature and Ricci flow as tools to study the alignment of representations between humans and artificial neural systems on a geometric level. As a proof-of-principle study, we compared the representations of face stimuli between VGG-Face, a human-aligned version of VGG-Face, and corresponding human similarity judgments from a large online study. Using this discrete geometric framework, we were able to identify local structural similarities and differences by examining the distributions of node and edge curvature and higher-level properties by detecting and comparing community structure in the representational graphs.
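The geometric tool at the heart of the paper, Ollivier-Ricci curvature, assigns each edge (x, y) of a graph the value κ(x, y) = 1 − W₁(μ_x, μ_y) / d(x, y), where μ_x is a probability distribution concentrated on the neighbours of x and W₁ is the Wasserstein-1 (earth mover's) distance. A minimal pure-Python sketch on a toy graph can illustrate the definition; this is not the authors' implementation (practical pipelines typically use a dedicated library and exact or approximate optimal transport solvers), and the brute-force transport below, with uniform neighbour distributions and no lazy-walk parameter, is only feasible for tiny neighbourhoods:

```python
import math
from itertools import permutations
from collections import deque

def shortest_paths(adj, src):
    """BFS hop distances from src in an unweighted, undirected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def w1_uniform(points_a, points_b, dist):
    """Exact W1 between two uniform distributions: replicate each support
    to a common size, then brute-force the cheapest assignment."""
    n = math.lcm(len(points_a), len(points_b))
    a = [p for p in points_a for _ in range(n // len(points_a))]
    b = [p for p in points_b for _ in range(n // len(points_b))]
    best = min(sum(dist[x][y] for x, y in zip(a, perm))
               for perm in permutations(b))
    return best / n

def ollivier_ricci(adj, x, y):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), mu uniform on neighbours."""
    dist = {u: shortest_paths(adj, u) for u in adj}
    return 1.0 - w1_uniform(sorted(adj[x]), sorted(adj[y]), dist) / dist[x][y]

# Toy representational graph: triangle A-B-C with a pendant node D on C.
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(ollivier_ricci(adj, "A", "B"))  # 0.5 -- triangle edge, positively curved
print(ollivier_ricci(adj, "C", "D"))  # 0.0 -- pendant edge, flat
```

Positive edge curvature indicates overlapping neighbourhoods (well-connected communities), while negative curvature marks bridge-like edges; this is what makes the distribution of edge curvatures informative about local representational structure.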
Related papers
- Revealing Bias Formation in Deep Neural Networks Through the Geometric Mechanisms of Human Visual Decoupling [9.609083308026786]
Deep neural networks (DNNs) often exhibit biases toward certain categories during object recognition.
We propose a geometric analysis framework linking the geometric complexity of class-specific perceptual manifolds to model bias.
We present the Perceptual-Manifold-Geometry library, designed for calculating the geometric properties of perceptual manifolds.
arXiv Detail & Related papers (2025-02-17T13:54:02Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - Exploring the Manifold of Neural Networks Using Diffusion Geometry [7.038126249994092]
We learn a manifold whose data points are neural networks by introducing a distance between the networks' hidden-layer representations.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks.
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
arXiv Detail & Related papers (2024-11-19T16:34:45Z) - Representational dissimilarity metric spaces for stochastic neural networks [4.229248343585332]
Quantifying similarity between neural representations is a perennial problem in deep learning and neuroscience research.
We generalize shape metrics to quantify differences in representations.
We find that neurobiological representations of oriented visual gratings resemble those of untrained deep networks, while representations of naturalistic scenes resemble those of trained networks.
arXiv Detail & Related papers (2022-11-21T17:32:40Z) - Topology and geometry of data manifold in deep learning [0.0]
This article describes and substantiates the geometric and topological view of the learning process of neural networks.
We present a wide range of experiments on different datasets and different configurations of convolutional neural network architectures.
Our work is a contribution to the development of an important area of explainable and interpretable AI through the example of computer vision.
arXiv Detail & Related papers (2022-04-19T02:57:47Z) - An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
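The key property this entry adds is the triangle inequality, obtained by quotienting a norm by an isometric group action. As a hedged toy illustration (not the paper's CCA-based construction), a 2-D rotation-invariant Procrustes distance d(X, Y) = min over rotations R of ||X − RY||_F is such a quotient metric and admits a closed form via the optimal rotation angle:

```python
import math
import random

def procrustes_dist(X, Y):
    """Rotation-invariant shape metric between two 2-D point configurations:
    d(X, Y) = min over rotations R of the Frobenius norm ||X - R Y||.
    In 2-D the optimal rotation has a closed form, so no iteration is needed."""
    A = sum(x1*y1 + x2*y2 for (x1, x2), (y1, y2) in zip(X, Y))  # sum of dot products
    B = sum(x2*y1 - x1*y2 for (x1, x2), (y1, y2) in zip(X, Y))  # sum of cross terms
    sq = sum(x1*x1 + x2*x2 + y1*y1 + y2*y2
             for (x1, x2), (y1, y2) in zip(X, Y))
    # min_theta cost = sq - 2*max_theta(A cos t + B sin t) = sq - 2*hypot(A, B)
    return math.sqrt(max(sq - 2.0 * math.hypot(A, B), 0.0))

def rotate(P, t):
    return [(math.cos(t)*a - math.sin(t)*b, math.sin(t)*a + math.cos(t)*b)
            for a, b in P]

random.seed(0)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
Y = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
Z = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
print(procrustes_dist(X, rotate(X, 1.3)) < 1e-6)  # True: rotations are quotiented out
print(procrustes_dist(X, Z) <= procrustes_dist(X, Y) + procrustes_dist(Y, Z))  # True
```

Because the rotation group acts by isometries of the Frobenius norm, the quotient distance inherits symmetry and the triangle inequality, which is exactly the structural property the generalized shape metrics are designed to guarantee.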
arXiv Detail & Related papers (2021-10-27T19:48:55Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - Emergence of Lie symmetries in functional architectures learned by CNNs [63.69764116066748]
We study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images.
Our architecture is built in such a way to mimic the early stages of biological visual systems.
arXiv Detail & Related papers (2021-04-17T13:23:26Z) - Neural Topological SLAM for Visual Navigation [112.73876869904]
We design topological representations for space that leverage semantics and afford approximate geometric reasoning.
We describe supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
arXiv Detail & Related papers (2020-05-25T17:56:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.