Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences
- URL: http://arxiv.org/abs/2408.05798v1
- Date: Sun, 11 Aug 2024 15:17:11 GMT
- Title: Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences
- Authors: Zhaoze Wang, Ronald W. Di Tullio, Spencer Rooke, Vijay Balasubramanian
- Abstract summary: We show that place cells emerge in networks trained to remember temporally continuous sensory episodes.
Place fields reproduce key aspects of hippocampal phenomenology.
- Score: 0.7499722271664147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vertebrate hippocampus is believed to use recurrent connectivity in area CA3 to support episodic memory recall from partial cues. This brain area also contains place cells, whose location-selective firing fields implement maps supporting spatial memory. Here we show that place cells emerge in networks trained to remember temporally continuous sensory episodes. We model CA3 as a recurrent autoencoder that recalls and reconstructs sensory experiences from noisy and partially occluded observations by agents traversing simulated rooms. The agents move in realistic trajectories modeled from rodents and environments are modeled as high-dimensional sensory experience maps. Training our autoencoder to pattern-complete and reconstruct experiences with a constraint on total activity causes spatially localized firing fields, i.e., place cells, to emerge in the encoding layer. The emergent place fields reproduce key aspects of hippocampal phenomenology: a) remapping (maintenance of and reversion to distinct learned maps in different environments), implemented via repositioning of experience manifolds in the network's hidden layer, b) orthogonality of spatial representations in different arenas, c) robust place field emergence in differently shaped rooms, with single units showing multiple place fields in large or complex spaces, and d) slow representational drift of place fields. We argue that these results arise because continuous traversal of space makes sensory experience temporally continuous. We make testable predictions: a) rapidly changing sensory context will disrupt place fields, b) place fields will form even if recurrent connections are blocked, but reversion to previously learned representations upon remapping will be abolished, c) the dimension of temporally smooth experience sets the dimensionality of place fields, including during virtual navigation of abstract spaces.
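The model described in the abstract can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: all shapes, weight initializations, and the settling/readout scheme are assumptions. It shows the core ingredients named above: a recurrent encoding layer that pattern-completes noisy, partially occluded sensory observations, trained with a reconstruction objective plus a total-activity constraint on the hidden layer.

```python
import numpy as np

# Toy sketch (hypothetical shapes and names) of the paper's setup: a recurrent
# autoencoder that reconstructs sensory experiences from noisy, occluded
# observations, with an L1 activity penalty on the encoding layer -- the
# constraint the paper links to the emergence of localized place fields.

rng = np.random.default_rng(0)
n_sensory, n_hidden, n_steps = 64, 128, 5

W_in = rng.normal(0, 0.1, (n_hidden, n_sensory))   # sensory -> hidden
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))   # recurrent (CA3-like)
W_out = rng.normal(0, 0.1, (n_sensory, n_hidden))  # hidden -> reconstruction

def forward(x_noisy):
    """Recurrently settle the hidden state, then reconstruct the input."""
    h = np.zeros(n_hidden)
    for _ in range(n_steps):
        h = np.tanh(W_in @ x_noisy + W_rec @ h)
    return W_out @ h, h

x = rng.normal(size=n_sensory)                 # sensory experience at one location
mask = rng.random(n_sensory) > 0.3             # partial occlusion
x_noisy = x * mask + 0.1 * rng.normal(size=n_sensory)

x_hat, h = forward(x_noisy)
recon_loss = np.mean((x_hat - x) ** 2)         # pattern-completion objective
activity_penalty = np.abs(h).mean()            # total-activity constraint
loss = recon_loss + 0.1 * activity_penalty
```

Trained over trajectories through a simulated room (not shown here), the hidden units `h` are the population in which the paper reports spatially localized firing fields emerging.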
Related papers
- "What" x "When" working memory representations using Laplace Neural Manifolds [8.04565443575703]
Working memory can remember recent events as they recede continuously into the past.
Neurons coding working memory show conjunctive receptive fields, mixing "what" and "when" information.
We sketch a continuous attractor network that constructs a Laplace Neural Manifold.
arXiv Detail & Related papers (2024-09-30T16:47:45Z) - Discretization of continuous input spaces in the hippocampal autoencoder [0.0]
We show that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells.
We extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner.
Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations.
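The sparse-autoencoder idea in this entry can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's model: a k-winners-take-all hidden layer encodes smoothly varying, position-dependent sensory input, so each unit ends up active only over a small region of the track, analogous to a discretized place field.

```python
import numpy as np

# Toy sketch (all details assumed): sparse coding of a continuous input space.
# Sensory input varies smoothly along a 1D track; a k-WTA encoder keeps only
# the k most active hidden units per position, so units tile the space.

rng = np.random.default_rng(1)
n_pos, n_in, n_hidden, k = 100, 32, 50, 3

# Smoothly varying "sensory experience" as a function of position p in [0, 1].
basis = rng.normal(size=(n_in, 4))
positions = np.linspace(0, 1, n_pos)
X = np.stack([basis @ np.array([np.sin(2 * np.pi * p), np.cos(2 * np.pi * p), p, 1.0])
              for p in positions])

W = rng.normal(0, 0.3, (n_hidden, n_in))

def encode(x):
    a = W @ x
    thresh = np.sort(a)[-k]          # k-th largest activation
    return np.where(a >= thresh, a, 0.0)

tuning = np.stack([encode(x) for x in X])    # (positions, hidden units)
active_frac = (tuning > 0).mean(axis=0)      # fraction of track where each unit fires
```

Plotting a column of `tuning` against `positions` would show the unit's spatial tuning curve; sparse units are active only over limited stretches of the track.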
arXiv Detail & Related papers (2024-05-23T14:16:44Z) - Spatially-Aware Transformer for Embodied Agents [20.498778205143477]
This paper explores the use of Spatially-Aware Transformer models that incorporate spatial information.
We demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks.
We also propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning.
arXiv Detail & Related papers (2024-02-23T07:46:30Z) - Nothing Stands Still: A Spatiotemporal Benchmark on 3D Point Cloud
Registration Under Large Geometric and Temporal Change [86.44429778015657]
Building 3D geometric maps of man-made spaces is fundamental to computer vision and robotics.
The Nothing Stands Still (NSS) benchmark focuses on the spatiotemporal registration of 3D scenes undergoing large spatial and temporal change.
As part of NSS, we introduce a dataset of 3D point clouds recurrently captured in large-scale building indoor environments that are under construction or renovation.
arXiv Detail & Related papers (2023-11-15T20:09:29Z) - DQnet: Cross-Model Detail Querying for Camouflaged Object Detection [54.82390534024954]
A convolutional neural network (CNN) for camouflaged object detection tends to activate local discriminative regions while ignoring complete object extent.
In this paper, we argue that partial activation is caused by the intrinsic characteristics of CNN.
In order to obtain feature maps that could activate full object extent, a novel framework termed Cross-Model Detail Querying network (DQnet) is proposed.
arXiv Detail & Related papers (2022-12-16T06:23:58Z) - Implicit Neural Spatial Representations for Time-dependent PDEs [29.404161110513616]
Implicit Neural Spatial Representation (INSR) has emerged as an effective representation of spatially-dependent vector fields.
This work explores solving time-dependent PDEs with INSR.
arXiv Detail & Related papers (2022-09-30T22:46:40Z) - Semi-signed neural fitting for surface reconstruction from unoriented
point clouds [53.379712818791894]
We propose SSN-Fitting to reconstruct a better signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
arXiv Detail & Related papers (2022-06-14T09:40:17Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Organization of a Latent Space structure in VAE/GAN trained by
navigation data [0.0]
We present a novel artificial cognitive mapping system using generative deep neural networks (VAE/GAN).
We show that the distance of the predicted image is reflected in the distance of the corresponding latent vector after training.
The present study allows the network to internally generate temporal sequences analogous to hippocampal replay/pre-play.
arXiv Detail & Related papers (2021-02-03T03:13:26Z) - Evidential Sparsification of Multimodal Latent Spaces in Conditional
Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
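The filtering step this entry describes can be sketched in a highly simplified form. This is not the paper's evidential-theory machinery; the "evidence" criterion below (logits above their mean) is a stand-in assumption, used only to show the shape of the operation: keep latent classes with direct evidence for a given input condition, drop the rest, and renormalize into a sparse categorical distribution.

```python
import numpy as np

# Simplified sketch (assumed criterion): sparsify a discrete latent
# distribution by keeping only classes with positive "evidence" for
# this input condition, then renormalizing.

def sparsify_latents(logits):
    evidence = logits - logits.mean()      # stand-in for direct evidence
    keep = evidence > 0                    # classes supported by this condition
    probs = np.exp(logits - logits.max())  # numerically stable softmax weights
    probs = probs * keep                   # zero out unsupported classes
    return probs / probs.sum(), keep

logits = np.array([2.0, -1.0, 0.5, -3.0, 1.5])
probs, keep = sparsify_latents(logits)     # sparse distribution over 5 classes
```

The result is a valid probability distribution supported on a strict subset of the latent classes, which is the effect the paper reports for its evidential filter.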
arXiv Detail & Related papers (2020-10-19T01:27:21Z) - CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations [72.4716073597902]
We propose a method to learn canonical spatiotemporal point cloud representations of dynamically moving or deforming objects.
We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation.
arXiv Detail & Related papers (2020-08-06T17:58:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.