Discretization of continuous input spaces in the hippocampal autoencoder
- URL: http://arxiv.org/abs/2405.14600v1
- Date: Thu, 23 May 2024 14:16:44 GMT
- Title: Discretization of continuous input spaces in the hippocampal autoencoder
- Authors: Adrian F. Amil, Ismael T. Freire, Paul F. M. J. Verschure
- Abstract summary: We show that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells.
We extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner.
Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The hippocampus has been associated with both spatial cognition and episodic memory formation, but integrating these functions into a unified framework remains challenging. Here, we demonstrate that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells. We then show that the resulting very high-dimensional code enables neurons to discretize and tile the underlying image space with minimal overlap. Additionally, we extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner. Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations.
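The core mechanism described above is a sparse autoencoder whose hidden units act as near-discrete memories of individual sensory events. As an illustration only (not the authors' released code), here is a minimal PyTorch sketch of such an autoencoder with a top-k (k-winners-take-all) bottleneck; the class name, layer sizes, and the value of k are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class TopKSparseAutoencoder(nn.Module):
    """Autoencoder with a k-winners-take-all hidden layer.

    Sketch of the idea in the abstract: a very wide, very sparse hidden code
    in which only k units stay active per input, so individual neurons come to
    respond to narrow regions of the input space (place-cell-like tuning).
    Sizes and k are illustrative assumptions, not the paper's settings.
    """

    def __init__(self, n_inputs: int = 784, n_hidden: int = 10_000, k: int = 16):
        super().__init__()
        self.encoder = nn.Linear(n_inputs, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_inputs)
        self.k = k

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.encoder(x))
        # Keep only the k most active units per sample; zero out the rest.
        topk = torch.topk(h, self.k, dim=-1)
        sparse = torch.zeros_like(h)
        sparse.scatter_(-1, topk.indices, topk.values)
        return sparse

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))


if __name__ == "__main__":
    # Toy usage: reconstruct random "visual events" and inspect the sparse code.
    model = TopKSparseAutoencoder()
    x = torch.rand(32, 784)
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    code = model.encode(x)
    print(code.shape, (code > 0).float().sum(dim=-1).mean())  # about k active units
```

With a wide enough hidden layer, different inputs recruit largely non-overlapping sets of winners, which is the discretization and tiling property the abstract describes; a reinforcement learning agent could then, for instance, learn a linear value function over this sparse code.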
Related papers
- Storing overlapping associative memories on latent manifolds in low-rank spiking networks [5.041384008847852]
We revisit the associative memory problem in light of advances in understanding spike-based computation.
We show that the spiking activity for a large class of all-inhibitory networks is situated on a low-dimensional, convex, and piecewise-linear manifold.
We propose several learning rules, and demonstrate a linear scaling of the storage capacity with the number of neurons, as well as robust pattern completion abilities.
arXiv Detail & Related papers (2024-11-26T14:48:25Z)
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences [0.7499722271664147]
We show that place cells emerge in networks trained to remember temporally continuous sensory episodes.
Place fields reproduce key aspects of hippocampal phenomenology.
arXiv Detail & Related papers (2024-08-11T15:17:11Z)
- CrEIMBO: Cross Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
CrEIMBO (Cross-Ensemble Interactions in Multi-view Brain Observations) identifies the composition of per-session neural ensembles.
CrEIMBO distinguishes session-specific from global (session-invariant) computations by exploring when distinct sub-circuits are active.
We demonstrate CrEIMBO's ability to recover ground truth components in synthetic data and uncover meaningful brain dynamics.
arXiv Detail & Related papers (2024-05-27T17:48:32Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections between segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Relating transformers to models and neural representations of the hippocampal formation [0.7734726150561088]
One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind.
We show that transformers, when equipped with recurrent position encodings, replicate the precisely tuned spatial representations of the hippocampal formation.
This work further links the computations of artificial and brain networks, offers a novel understanding of hippocampal-cortical interaction, and suggests how wider cortical areas may perform complex tasks beyond the reach of current neuroscience models, such as language comprehension.
arXiv Detail & Related papers (2021-12-07T23:14:07Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then vanish completely if no further stimuli arrive.
This kind of short-term memory can hold operative information for seconds and then forget it entirely, avoiding interference with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
arXiv Detail & Related papers (2021-06-22T19:51:59Z)
- HM4: Hidden Markov Model with Memory Management for Visual Place Recognition [54.051025148533554]
We develop a Hidden Markov Model approach for visual place recognition in autonomous driving.
Our algorithm, dubbed HM4, exploits temporal look-ahead to transfer promising candidate images between passive storage and active memory.
We show that this allows constant time and space inference for a fixed coverage area.
arXiv Detail & Related papers (2020-11-01T08:49:24Z)
- Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
arXiv Detail & Related papers (2020-10-19T01:27:21Z)
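For the evidential sparsification entry above, a very rough sketch of the general idea (filter a conditional VAE's discrete latent classes down to those supported by the current input condition) might look like the following. The thresholding rule is a deliberate simplification and an assumption, not the paper's actual evidential-theory procedure, and the function name is hypothetical.

```python
import numpy as np

def sparsify_discrete_latent(class_logits, threshold=None):
    """Keep only latent classes whose posterior mass, given the condition,
    reaches a baseline (uniform prior by default).

    Simplified stand-in for the evidential filtering described in the paper
    above; not its actual Dempster-Shafer-based procedure.
    """
    logits = class_logits - class_logits.max()       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()    # softmax over latent classes
    if threshold is None:
        threshold = 1.0 / probs.size                 # uniform-prior cutoff
    keep = probs >= threshold
    sparse = np.where(keep, probs, 0.0)
    return sparse / sparse.sum(), keep

# Toy usage: 10 latent classes, only a few receive support from the condition.
rng = np.random.default_rng(0)
probs, kept = sparsify_discrete_latent(rng.normal(size=10))
print(int(kept.sum()), "classes kept out of", kept.size)
```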
This list is automatically generated from the titles and abstracts of the papers on this site.