A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps
- URL: http://arxiv.org/abs/2503.08608v1
- Date: Tue, 11 Mar 2025 16:45:52 GMT
- Title: A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps
- Authors: Sven Krausse, Emre Neftci, Friedrich T. Sommer, Alpha Renner
- Abstract summary: The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. Here, we propose a mechanistic model for versatile information processing in the entorhinal-hippocampal formation inspired by continuous attractor networks (CANs) and Vector Symbolic Architectures (VSAs). The novel grid-cell VSA model employs a spatially structured encoding scheme with 3D neuronal modules mimicking the discrete scales and orientations of grid cell modules.
- Score: 4.498459787490856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. This system is well-studied in neuroscience, and its efficiency and versatility make it attractive for applications in robotics and machine learning. While continuous attractor networks (CANs) successfully model entorhinal grid cells for encoding physical space, integrating both continuous spatial and abstract spatial computations into a unified framework remains challenging. Here, we attempt to bridge this gap by proposing a mechanistic model for versatile information processing in the entorhinal-hippocampal formation inspired by CANs and Vector Symbolic Architectures (VSAs), a neuro-symbolic computing framework. The novel grid-cell VSA (GC-VSA) model employs a spatially structured encoding scheme with 3D neuronal modules mimicking the discrete scales and orientations of grid cell modules, reproducing their characteristic hexagonal receptive fields. In experiments, the model demonstrates versatility in spatial and abstract tasks: (1) accurate path integration for tracking locations, (2) spatio-temporal representation for querying object locations and temporal relations, and (3) symbolic reasoning using family trees as a structured test case for hierarchical relationships.
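A minimal sketch may help make the encoding concrete. It is not the authors' implementation; it only assumes a standard phasor-based (Fourier) VSA formulation of grid modules: each module encodes 2D position along three preferred directions separated by 60 degrees (the hexagonal symmetry noted in the abstract) at a module-specific scale, element-wise binding of phasor vectors adds phases and hence composes displacements (path integration), and binding with conjugate vectors supports role-filler queries of the kind used for family-tree reasoning. All function names and parameters (`make_modules`, `scale_ratio`, `random_symbol`, etc.) are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

# --- Spatial encoding with grid-cell-like modules -------------------------
def make_modules(num_modules=4, base_scale=1.0, scale_ratio=1.5):
    """Hypothetical grid modules: three preferred directions 60 degrees apart
    (hexagonal symmetry), each module with its own scale and orientation."""
    modules = []
    for m in range(num_modules):
        scale = base_scale * scale_ratio ** m
        orientation = rng.uniform(0, np.pi / 3)
        angles = orientation + np.array([0.0, 1.0, 2.0]) * np.pi / 3
        dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2)
        modules.append((dirs, scale))
    return modules

def encode(pos, modules):
    """Encode a 2D position as a vector of unit-magnitude complex phasors."""
    phases = np.concatenate(
        [dirs @ np.asarray(pos, dtype=float) * (2 * np.pi / scale)
         for dirs, scale in modules])
    return np.exp(1j * phases)

def bind(a, b):
    """Hadamard binding: multiplying phasors adds phases, composing displacements."""
    return a * b

modules = make_modules()
v0 = encode([0.0, 0.0], modules)
step = encode([0.3, -0.2], modules)
v1 = bind(v0, step)                      # path integration: move by the step
assert np.allclose(v1, encode([0.3, -0.2], modules))

# --- Symbolic binding (family-tree style role-filler relations) -----------
def random_symbol(dim=512):
    """A random phasor hypervector representing an atomic symbol."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, dim))

parent_of, alice, bob = random_symbol(), random_symbol(), random_symbol()
fact = bind(parent_of, bind(alice, bob))          # encode the relation
query = bind(np.conj(parent_of), np.conj(alice))  # unbind role and first argument
retrieved = bind(fact, query)                     # recovers `bob`
print("match:", np.abs(np.vdot(retrieved, bob)) / bob.size)  # ~1.0
```
In this idealized phasor setting, binding a start encoding with a displacement encoding reproduces the encoding of the displaced position exactly; in a noisy, neuronally implemented grid-cell VSA the correspondence would only be approximate.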
Related papers
- A Geometry-Aware Message Passing Neural Network for Modeling Aerodynamics over Airfoils [61.60175086194333]
Modeling aerodynamics is a key problem in aerospace engineering, often involving flows interacting with solid objects such as airfoils. Here, we consider modeling of incompressible flows over solid objects, wherein geometric structures are a key factor in determining aerodynamics. To effectively incorporate geometries, we propose a message passing scheme that efficiently and expressively integrates the airfoil shape with the mesh representation. These design choices lead to a purely data-driven machine learning framework known as GeoMPNN, which won the Best Student Submission award at the NeurIPS 2024 ML4CFD Competition, placing 4th overall.
arXiv Detail & Related papers (2024-12-12T16:05:39Z) - Self Supervised Networks for Learning Latent Space Representations of Human Body Scans and Motions [6.165163123577484]
This paper introduces self-supervised neural network models to tackle several fundamental problems in the field of 3D human body analysis and processing.
First, we propose VariShaPE, a novel architecture for the retrieval of latent space representations of body shapes and poses.
Second, we complement the estimation of latent codes with MoGeN, a framework that learns the geometry on the latent space itself.
arXiv Detail & Related papers (2024-11-05T19:59:40Z) - GridPE: Unifying Positional Encoding in Transformers with a Grid Cell-Inspired Framework [6.192516215592685]
We introduce a novel positional encoding scheme inspired by Fourier analysis and the latest findings in computational neuroscience regarding grid cells.
We derive an optimal grid scale ratio for spatial multi-dimensional spaces based on principles of biological efficiency.
Our theoretical analysis shows that GridPE provides a unifying framework for positional encoding in arbitrary high-dimensional spaces.
arXiv Detail & Related papers (2024-06-11T08:25:11Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Self-Supervised Learning of Representations for Space Generates Multi-Modular Grid Cells [16.208253624969142]
The mammalian lineage has developed striking spatial representations.
One important spatial representation is the Nobel Prize-winning grid cell, which represents self-location, a local and aperiodic quantity.
arXiv Detail & Related papers (2023-11-04T03:59:37Z) - A Topological Deep Learning Framework for Neural Spike Decoding [1.0062127381149395]
Two of the ways brains encode spatial information are through head direction cells and grid cells.
We develop a topological deep learning framework for neural spike train decoding.
arXiv Detail & Related papers (2022-12-01T15:43:20Z) - Testing geometric representation hypotheses from simulated place cell recordings [3.1498833540989413]
Hippocampal place cells can encode spatial locations of an animal in physical or task-relevant spaces.
We simulated place cell populations that encoded either Euclidean- or graph-based positions of a rat navigating to goal nodes in a maze with a graph topology.
arXiv Detail & Related papers (2022-11-16T18:29:17Z) - Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells [52.425628028229156]
We study the properties of grid cells using recurrent network models.
We focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells.
arXiv Detail & Related papers (2022-10-06T05:26:49Z) - Deep Representations for Time-varying Brain Datasets [4.129225533930966]
This paper builds an efficient graph neural network model that incorporates both region-mapped fMRI sequences and structural connectivities as inputs.
We find good representations of the latent brain dynamics through learning sample-level adaptive adjacency matrices.
These modules can be easily adapted to, and are potentially useful for, other applications outside the neuroscience domain.
arXiv Detail & Related papers (2022-05-23T21:57:31Z) - Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Neural Topological SLAM for Visual Navigation [112.73876869904]
We design topological representations for space that leverage semantics and afford approximate geometric reasoning.
We describe supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
arXiv Detail & Related papers (2020-05-25T17:56:29Z)