Discretization of continuous input spaces in the hippocampal autoencoder
- URL: http://arxiv.org/abs/2405.14600v1
- Date: Thu, 23 May 2024 14:16:44 GMT
- Title: Discretization of continuous input spaces in the hippocampal autoencoder
- Authors: Adrian F. Amil, Ismael T. Freire, Paul F. M. J. Verschure
- Abstract summary: We show that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells.
We extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner.
Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The hippocampus has been associated with both spatial cognition and episodic memory formation, but integrating these functions into a unified framework remains challenging. Here, we demonstrate that forming discrete memories of visual events in sparse autoencoder neurons can produce spatial tuning similar to hippocampal place cells. We then show that the resulting very high-dimensional code enables neurons to discretize and tile the underlying image space with minimal overlap. Additionally, we extend our results to the auditory domain, showing that neurons similarly tile the frequency space in an experience-dependent manner. Lastly, we show that reinforcement learning agents can effectively perform various visuo-spatial cognitive tasks using these sparse, very high-dimensional representations.
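The mechanism the abstract describes — sparse autoencoder neurons forming discrete, minimally overlapping memories that tile a continuous input space — can be illustrated with a minimal k-sparse autoencoder on synthetic sensory inputs. This is a hedged sketch of the general technique, not the authors' implementation; the track geometry, layer sizes, and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensory events": Gaussian activity bumps at positions on a 1D track.
def bump(pos, n_in=50, width=3.0):
    x = np.arange(n_in)
    return np.exp(-(x - pos) ** 2 / (2 * width ** 2))

positions = rng.uniform(5, 45, size=400)
X = np.stack([bump(p) for p in positions])        # (400, 50) input batch

n_hid, k, lr = 200, 5, 0.5                        # high-dimensional code, k active units
We = rng.normal(scale=0.1, size=(50, n_hid))      # encoder weights
Wd = rng.normal(scale=0.1, size=(n_hid, 50))      # decoder weights

def encode(X, We, k):
    A = np.maximum(X @ We, 0.0)                   # ReLU pre-activation
    thresh = np.partition(A, -k, axis=1)[:, -k][:, None]
    return A * (A >= thresh)                      # keep only the top-k activations

losses = []
for _ in range(300):
    A = np.maximum(X @ We, 0.0)
    thresh = np.partition(A, -k, axis=1)[:, -k][:, None]
    mask = (A > 0) & (A >= thresh)
    H = A * mask
    R = X - H @ Wd                                # reconstruction residual
    losses.append(float(np.mean(R ** 2)))
    Wd += lr * H.T @ R / len(X)                   # gradient step on the decoder
    We += lr * X.T @ ((R @ Wd.T) * mask) / len(X) # gradient through the top-k mask

# "Place field" center of each used unit: activation-weighted mean position.
H = encode(X, We, k)
used = H.sum(axis=0) > 0
centers = (H[:, used].T @ positions) / H[:, used].sum(axis=0)
```

Because at most k of the 200 units fire per input, codes for distant track positions share few active units, which is the discretization-with-minimal-overlap property the abstract refers to; `centers` gives each active unit's preferred location on the track.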
Related papers
- Spontaneous Spatial Cognition Emerges during Egocentric Video Viewing through Non-invasive BCI [42.53877172400408]
We show for the first time that non-invasive brain-computer interfaces can decode spontaneous, fine-grained egocentric 6D pose.
Despite EEG's limited spatial resolution and high signal noise, we find that spatially coherent visual input reliably evokes decodable spatial representations.
arXiv Detail & Related papers (2025-07-16T17:07:57Z)
- Age Sensitive Hippocampal Functional Connectivity: New Insights from 3D CNNs and Saliency Mapping [55.27843586881593]
We develop an interpretable deep learning framework to predict brain age from hippocampal functional connectivity analysis.
Key hippocampal-cortical connections are mapped, particularly with the precuneus, cuneus, posterior cingulate cortex, parahippocampal cortex, left superior parietal lobule, and right superior temporal sulcus.
Findings provide new insights into the functional mechanisms of hippocampal aging and demonstrate the power of explainable deep learning to uncover biologically meaningful patterns in neuroimaging data.
arXiv Detail & Related papers (2025-07-02T07:05:18Z)
- Latent Structured Hopfield Network for Semantic Association and Retrieval [52.634915010996835]
Episodic memory enables humans to recall past experiences by associating semantic elements such as objects, locations, and time into coherent event representations.
We propose the Latent Structured Hopfield Network (LSHN), a framework that integrates continuous Hopfield attractor dynamics into an autoencoder architecture.
Unlike traditional Hopfield networks, our model is trained end-to-end with gradient descent, achieving scalable and robust memory retrieval.
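The attractor-based retrieval that LSHN builds on can be illustrated with a classical discrete Hopfield network: Hebbian outer-product storage plus sign-threshold dynamics that pull a corrupted cue back to the nearest stored pattern. This is a sketch of the traditional baseline the snippet contrasts against, not LSHN's continuous, end-to-end-trained dynamics; sizes and corruption level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                                   # neurons, stored patterns

patterns = rng.choice([-1, 1], size=(P, N))     # random bipolar memories
W = (patterns.T @ patterns) / N                 # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                        # no self-connections

def retrieve(cue, W, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)         # synchronous sign update
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1                                 # corrupt 10 of 100 bits
recalled = retrieve(cue, W)
```

At this low memory load (3 patterns in 100 neurons), the corrupted cue falls inside the stored pattern's basin of attraction and the dynamics restore it exactly.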
arXiv Detail & Related papers (2025-06-02T04:24:36Z)
- Storing overlapping associative memories on latent manifolds in low-rank spiking networks [5.041384008847852]
We revisit the associative memory problem in light of advances in understanding spike-based computation.
We show that the spiking activity for a large class of all-inhibitory networks is situated on a low-dimensional, convex, and piecewise-linear manifold.
We propose several learning rules, and demonstrate a linear scaling of the storage capacity with the number of neurons, as well as robust pattern completion abilities.
arXiv Detail & Related papers (2024-11-26T14:48:25Z)
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences [0.7499722271664147]
We show that place cells emerge in networks trained to remember temporally continuous sensory episodes.
Place fields reproduce key aspects of hippocampal phenomenology.
arXiv Detail & Related papers (2024-08-11T15:17:11Z)
- CrEIMBO: Cross Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
CrEIMBO (Cross-Ensemble Interactions in Multi-view Brain Observations) identifies the composition of per-session neural ensembles.
CrEIMBO distinguishes session-specific from global (session-invariant) computations by exploring when distinct sub-circuits are active.
We demonstrate CrEIMBO's ability to recover ground truth components in synthetic data and uncover meaningful brain dynamics.
arXiv Detail & Related papers (2024-05-27T17:48:32Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Relating transformers to models and neural representations of the hippocampal formation [0.7734726150561088]
One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind.
We show that transformers, when equipped with recurrent position encodings, replicate the precisely tuned spatial representations of the hippocampal formation.
This work continues to bind computations of artificial and brain networks, offers a novel understanding of the hippocampal-cortical interaction, and suggests how wider cortical areas may perform complex tasks beyond current neuroscience models such as language comprehension.
arXiv Detail & Related papers (2021-12-07T23:14:07Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds then completely disappear if no other stimuli come.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
arXiv Detail & Related papers (2021-06-22T19:51:59Z)
- HM4: Hidden Markov Model with Memory Management for Visual Place Recognition [54.051025148533554]
We develop a Hidden Markov Model approach for visual place recognition in autonomous driving.
Our algorithm, dubbed HM$^4$, exploits temporal look-ahead to transfer promising candidate images between passive storage and active memory.
We show that this allows constant time and space inference for a fixed coverage area.
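The filtering core underlying this kind of HMM-based place recognition can be sketched as a standard forward algorithm over discrete places, with a band-diagonal transition matrix encoding forward vehicle motion. This is a generic illustration, not HM$^4$'s memory-management scheme; the loop size, transition probabilities, and toy appearance likelihood are all hypothetical.

```python
import numpy as np

n_places = 20                                    # discrete map locations on a loop
# Transition model: the vehicle mostly advances one place per frame.
T = np.zeros((n_places, n_places))
for i in range(n_places):
    T[i, i] = 0.2                                # stay
    T[i, (i + 1) % n_places] = 0.7               # advance one place
    T[i, (i + 2) % n_places] = 0.1               # skip ahead

rng = np.random.default_rng(2)

def observe(true_place):
    # Toy appearance likelihood: peaked at the true place, noisy elsewhere.
    like = 0.3 * rng.random(n_places)
    like[true_place] += 1.0
    return like

belief = np.full(n_places, 1.0 / n_places)       # uniform prior over places
true_place = 0
for _ in range(15):
    true_place = (true_place + 1) % n_places     # ground-truth forward motion
    belief = T.T @ belief                        # predict with the motion model
    belief *= observe(true_place)                # weight by appearance likelihood
    belief /= belief.sum()                       # normalize (forward algorithm)

estimate = int(np.argmax(belief))
```

Each predict-update-normalize step costs a fixed amount of work per place, which is the constant-time-inference property the snippet highlights once the candidate set kept in active memory is bounded.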
arXiv Detail & Related papers (2020-11-01T08:49:24Z)
- Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
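The filtering idea — keep only the discrete latent classes that a given input condition actively supports — can be loosely illustrated by thresholding a softmax posterior against the uniform baseline. This is a simplification for intuition only, not the paper's evidential (Dempster-Shafer) construction; the logit values are made up.

```python
import numpy as np

logits = np.array([2.0, 0.1, -1.5, 1.2, -0.3])   # hypothetical latent-class logits
p = np.exp(logits - logits.max())
p /= p.sum()                                      # softmax over K discrete classes

# Keep only classes whose posterior mass exceeds the uniform baseline 1/K,
# i.e. classes the input condition lends positive support to; renormalize.
K = len(p)
keep = p > 1.0 / K
sparse_p = np.where(keep, p, 0.0)
sparse_p /= sparse_p.sum()
```

The result is a sparser categorical distribution over the same latent classes, which is the shape of output the evidential filter produces.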
arXiv Detail & Related papers (2020-10-19T01:27:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.