Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps
- URL: http://arxiv.org/abs/2406.18808v1
- Date: Thu, 27 Jun 2024 00:53:53 GMT
- Title: Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps
- Authors: Christopher J. Kymn, Sonia Mazelet, Anthony Thomas, Denis Kleyko, E. Paxon Frady, Friedrich T. Sommer, Bruno A. Olshausen
- Abstract summary: We propose a normative model for spatial representation in the hippocampal formation.
We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension.
More generally, the model formalizes how compositional computations could occur in the hippocampal formation.
- Score: 8.679251532993428
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions.
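To make the encoding concrete, below is a minimal sketch of the two ingredients the abstract names: a residue number system over coprime moduli, and a similarity-preserving conjunctive binding implemented as elementwise multiplication of complex phasor vectors. The dimensionality, moduli, and function names are illustrative choices of ours; the attractor network, hexagonal coordinates, and error-correction machinery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024              # vector dimensionality (illustrative choice)
MODULI = (3, 5, 7)    # pairwise coprime, so the coding range is 3*5*7 = 105

# One random phasor base vector per modulus; restricting phases to multiples
# of 2*pi/m makes each base exactly m-periodic under elementwise powers.
bases = [np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in MODULI]

def encode(x):
    """Compose the residue encodings of x into one position vector by binding
    (elementwise complex multiplication, which preserves similarity)."""
    v = np.ones(D, dtype=complex)
    for base, m in zip(bases, MODULI):
        v = v * base ** (x % m)
    return v

def similarity(a, b):
    """Normalized inner product; ~1 for the same position, ~0 otherwise."""
    return float((a @ b.conj()).real / D)

p = encode(17)
print(similarity(p, encode(17)))             # ~1.0
print(similarity(p, encode(18)))             # ~0.0: nearly orthogonal
# Path integration is another binding: shifting by 5 maps encode(17) to encode(22).
print(similarity(p * encode(5), encode(22))) # 1.0
```

Because the moduli are coprime, the Chinese remainder theorem gives the composed code a range equal to their product (105 here) even though each module only tracks a small residue, and a shift updates every module independently, which is what makes the encoding carry-free.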
Related papers
- Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints [9.380902608139902]
We show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets.
We outline the relevant theory, which relies on differential geometry, and describe the corresponding gradient-descent optimization algorithm.
arXiv Detail & Related papers (2024-08-28T20:56:35Z)
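The summary above is terse, so the following is only a generic illustration of how orthogonality constraints on decoder gradients can expose intrinsic dimension, in the spirit of the paper rather than as its algorithm; the finite-difference scheme and all names are ours.

```python
import numpy as np

def decoder_jacobian(decode, z, eps=1e-5):
    """Finite-difference Jacobian of the decoder at latent point z."""
    x0 = decode(z)
    cols = [(decode(z + eps * e) - x0) / eps for e in np.eye(len(z))]
    return np.stack(cols, axis=1)            # shape: (data_dim, latent_dim)

def orthogonality_penalty(J):
    """Penalty driving latent directions toward orthogonality in data space
    (off-diagonal entries of the Gram matrix J^T J)."""
    G = J.T @ J
    return np.sum((G - np.diag(np.diag(G))) ** 2)

def estimate_intrinsic_dim(J, tol=1e-3):
    """Columns whose norm collapses move no data-space mass; counting the
    surviving columns estimates the manifold's intrinsic dimension."""
    norms = np.linalg.norm(J, axis=0)
    return int(np.sum(norms > tol * norms.max()))

# Toy decoder with a dead third latent coordinate: estimated dimension is 2.
decode = lambda z: np.array([z[0], z[1], z[0] * z[1]])
print(estimate_intrinsic_dim(decoder_jacobian(decode, np.zeros(3))))
```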
- Neural Isometries: Taming Transformations for Equivariant ML [8.203292895010748]
We introduce Neural Isometries, an autoencoder framework which learns to map the observation space to a general-purpose latent space.
We show that a simple off-the-shelf equivariant network operating in the pre-trained latent space can achieve results on par with meticulously-engineered, handcrafted networks.
arXiv Detail & Related papers (2024-05-29T17:24:25Z)
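Neural Isometries seeks a latent space in which observation-space transformations act as linear isometries. As a point of reference (this is general linear algebra, not the paper's training objective), the best isometry relating two batches of paired latents has a closed form, the orthogonal Procrustes solution:

```python
import numpy as np

def best_isometry(Z_src, Z_tgt):
    """Orthogonal Procrustes: the isometry Omega minimizing the Frobenius
    error of mapping each source latent onto its target latent."""
    U, _, Vt = np.linalg.svd(Z_tgt.T @ Z_src)
    return U @ Vt          # orthogonal: Omega @ Omega.T = I

# Sanity check: recover a random isometry from paired latent codes.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # ground-truth isometry
Omega = best_isometry(Z, Z @ Q.T)
print(np.allclose(Omega, Q))                    # True
```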
- AdaContour: Adaptive Contour Descriptor with Hierarchical Representation [52.381359663689004]
Existing angle-based contour descriptors suffer from lossy representation for non-star shapes.
AdaCon is able to represent shapes more accurately and robustly than other descriptors.
arXiv Detail & Related papers (2024-04-12T07:30:24Z)
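For context on that limitation: a classic angle-based descriptor keeps one radius per angle around the centroid, so any boundary that a ray from the centroid crosses more than once (a non-star shape) cannot be recovered. A minimal version of that baseline, not AdaContour itself, makes the failure mode explicit:

```python
import numpy as np

def polar_descriptor(contour, n_angles=64):
    """One radius per angle bin from the centroid. When several boundary
    points share an angle (any non-star shape), only one radius survives,
    so the representation is lossy: the failure AdaContour addresses."""
    center = contour.mean(axis=0)
    rel = contour - center
    theta = np.arctan2(rel[:, 1], rel[:, 0])
    radius = np.linalg.norm(rel, axis=1)
    bins = np.floor((theta + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    desc = np.zeros(n_angles)
    for b, r in zip(bins, radius):
        desc[b] = max(desc[b], r)   # colliding points are silently discarded
    return desc
```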
- A Copula Graphical Model for Multi-Attribute Data using Optimal Transport [9.817170209575346]
We introduce a novel semiparametric multi-attribute graphical model based on a new copula named Cyclically Monotone Copula.
For the setting with high-dimensional attributes, a Projected Cyclically Monotone Copula model is proposed to address the curse of dimensionality issue.
arXiv Detail & Related papers (2024-04-10T04:49:00Z)
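The Cyclically Monotone Copula and its optimal-transport construction are specific to the paper, but the semiparametric recipe they generalize, the Gaussian-copula (nonparanormal) graphical model, is compact enough to sketch as a baseline: push each margin to normal scores, then read conditional independence off a sparse precision estimate.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

def normal_scores(X):
    """Semiparametric copula step: push each margin through its empirical
    CDF, then through the standard normal quantile function."""
    U = rankdata(X, axis=0) / (X.shape[0] + 1)
    return norm.ppf(U)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5)) ** 3          # non-Gaussian margins
model = GraphicalLasso(alpha=0.1).fit(normal_scores(X))
print(np.round(model.precision_, 2))        # zeros = conditional independence
```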
- Emergence of Grid-like Representations by Training Recurrent Networks with Conformal Normalization [48.99772993899573]
We study the emergence of hexagon grid patterns of grid cells based on a general recurrent neural network model.
We propose a simple yet general conformal normalization of the input velocity of the RNN.
We conduct extensive experiments to verify that conformal normalization is crucial for the emergence of hexagon grid patterns.
arXiv Detail & Related papers (2023-10-29T23:12:56Z)
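The property being enforced can be stated simply: a small physical displacement dx should move the recurrent state by a distance proportional to |dx|, whatever its direction. The paper achieves this by normalizing the velocity input to the RNN; the sketch below instead expresses the property as a penalty one could train against, so the form and names are illustrative rather than the paper's.

```python
import numpy as np

def conformal_isometry_penalty(step, h, scale, n_dirs=16, speed=0.05):
    """Deviation of the recurrent update from a conformal isometry: every
    small displacement dx should satisfy |step(h, dx) - h| ~= scale * |dx|."""
    angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    loss = 0.0
    for a in angles:
        dx = speed * np.array([np.cos(a), np.sin(a)])
        moved = np.linalg.norm(step(h, dx) - h)
        loss += (moved - scale * np.linalg.norm(dx)) ** 2
    return loss / n_dirs

# Toy check: a linear update that is itself a scaled isometry has zero penalty.
W = 2.0 * np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation scaled by 2
step = lambda h, dx: h + dx @ W.T
print(conformal_isometry_penalty(step, np.zeros(2), scale=2.0))  # ~0.0
```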
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
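The inductive bias here is per-dimension scalar quantization of the latent code: each coordinate snaps to the nearest entry in a small codebook for that dimension, so codes live on an organized discrete grid. A minimal numpy version of the quantization step follows; the paper additionally learns the codebooks end to end with straight-through gradients, which this sketch omits.

```python
import numpy as np

def quantize_latent(z, codebooks):
    """Snap each latent coordinate to its nearest per-dimension code value.
    z: (..., n_dims); codebooks: list of 1-D arrays, one per dimension."""
    zq = np.empty_like(z)
    for j, book in enumerate(codebooks):
        idx = np.abs(z[..., j, None] - book).argmin(axis=-1)
        zq[..., j] = book[idx]
    return zq

# Hypothetical setup: 3 latent dimensions, 4 candidate values each.
books = [np.linspace(-1.0, 1.0, 4) for _ in range(3)]
print(quantize_latent(np.array([0.10, -0.92, 0.45]), books))
# -> [ 0.33333333 -1.          0.33333333]
```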
- Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells [52.425628028229156]
We study the properties of grid cells using recurrent network models.
We focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells.
arXiv Detail & Related papers (2022-10-06T05:26:49Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
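Frame averaging makes a wrapped network exactly equivariant by averaging it over a small input-dependent set of group elements (a frame). A common concrete frame for point clouds uses PCA axes together with their sign ambiguities; the sketch below is that generic construction, not the paper's specific encoder/decoder pair.

```python
import numpy as np
from itertools import product

def pca_frames(X):
    """Frames for a 3-D point cloud: PCA axes with every sign combination.
    (Assumes distinct covariance eigenvalues; degeneracies need care.)"""
    _, V = np.linalg.eigh(np.cov((X - X.mean(axis=0)).T))
    return [V * np.array(s) for s in product((-1.0, 1.0), repeat=3)]

def frame_average(phi, X):
    """Equivariant wrapper: average over frames F of F applied to
    phi evaluated in F's coordinates; rotating X rotates the output."""
    c = X.mean(axis=0)
    outs = [phi((X - c) @ F) @ F.T + c for F in pca_frames(X)]
    return np.mean(outs, axis=0)
```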
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth interpolations and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
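The blurb does not spell out the architecture, but the usual way to turn per-point features on two shapes into a differentiable point-to-point correspondence, the kind of output described, is a row-stochastic softmax over feature similarities; the sketch below is that generic module with hypothetical names.

```python
import numpy as np

def soft_correspondence(feat_x, feat_y, tau=0.1):
    """Pi[i, j]: probability that point i on shape X matches point j on Y,
    from the similarity of their learned feature vectors."""
    sim = feat_x @ feat_y.T / tau
    sim -= sim.max(axis=1, keepdims=True)       # for numerical stability
    P = np.exp(sim)
    return P / P.sum(axis=1, keepdims=True)     # rows sum to 1

# Given Y's vertex positions, Pi @ verts_y transports them onto shape X,
# which is the usual basis for interpolating between corresponding points.
```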
- Field Convolutions for Surface CNNs [19.897276088740995]
We present a novel surface convolution operator acting on vector fields based on a simple observation.
This formulation combines intrinsic spatial convolution with parallel transport in a scattering operation.
We achieve state-of-the-art results on standard benchmarks in fundamental geometry processing tasks.
arXiv Detail & Related papers (2021-04-08T17:11:14Z)
- Autoencoder Image Interpolation by Shaping the Latent Space [12.482988592988868]
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
arXiv Detail & Related papers (2020-08-04T12:32:54Z)
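The summary leaves the regularizer unspecified, so the following is just one generic way to shape a latent space for interpolation: require that decoded latent mixtures re-encode to themselves, which keeps interpolants consistent with the training manifold. The loss form and names are illustrative, not the paper's.

```python
import numpy as np

def interpolation_consistency(encode, decode, z1, z2, n_steps=5):
    """Cycle loss on latent interpolants: decoding a mixture and re-encoding
    it should return (approximately) the same latent, so interpolated points
    stay consistent with the manifold spanned by the training images."""
    loss = 0.0
    for t in np.linspace(0.0, 1.0, n_steps):
        z = (1.0 - t) * z1 + t * z2
        loss += np.sum((encode(decode(z)) - z) ** 2)
    return loss / n_steps
```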