Manifold Learning via Memory and Context
- URL: http://arxiv.org/abs/2407.09488v1
- Date: Fri, 17 May 2024 17:06:19 GMT
- Title: Manifold Learning via Memory and Context
- Authors: Xin Li
- Abstract summary: We present a navigation-based approach to manifold learning using memory and context.
We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning.
We discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems.
- Score: 5.234742752529437
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Given a memory with infinite capacity, can we solve the learning problem? Apparently, nature has solved this problem as evidenced by the evolution of mammalian brains. Inspired by the organizational principles underlying hippocampal-neocortical systems, we present a navigation-based approach to manifold learning using memory and context. The key insight is to navigate on the manifold and memorize the positions of each route as inductive/design bias of direct-fit-to-nature. We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning via memory (local maps) and context (global indexing). The indexing to the library of local maps within global coordinates is collected by an associative memory serving as the librarian, which mimics the coupling between the hippocampus and the neocortex. In addition to breaking from the notorious bias-variance dilemma and the curse of dimensionality, we discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems. The energy efficiency of navigation-based learning makes it suitable for hardware implementation on non-von Neumann architectures, such as the emerging in-memory computing paradigm, including spiking neural networks and memristor neural networks.
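The abstract's core mechanism, a library of local maps (memory) retrieved through a global index (context), can be illustrated with a minimal sketch. This is a hypothetical toy construction, not the authors' implementation: local maps are approximated here by local PCA charts around memorized anchor points, and the associative "librarian" by a nearest-anchor lookup.

```python
import numpy as np

class NavigationManifoldLearner:
    """Hypothetical sketch of the idea: memorize positions along routes as
    a library of local maps (memory) indexed by global coordinates (context)."""

    def __init__(self, n_charts=10, chart_dim=2, seed=0):
        self.n_charts = n_charts
        self.chart_dim = chart_dim
        self.rng = np.random.default_rng(seed)
        self.anchors = None   # global index: one anchor position per local map
        self.charts = []      # library of local maps (here: local PCA bases)

    def fit(self, X):
        # Memorize anchor positions along the "route" through the data.
        idx = self.rng.choice(len(X), self.n_charts, replace=False)
        self.anchors = X[idx]
        for a in self.anchors:
            # Each local map: a PCA chart fit to the anchor's neighborhood.
            d = np.linalg.norm(X - a, axis=1)
            nbrs = X[np.argsort(d)[:20]]
            centered = nbrs - nbrs.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            self.charts.append((nbrs.mean(axis=0), vt[:self.chart_dim]))
        return self

    def transform(self, x):
        # The "librarian": associative lookup of the nearest local map,
        # then projection of the query into that chart's coordinates.
        i = int(np.argmin(np.linalg.norm(self.anchors - x, axis=1)))
        mean, basis = self.charts[i]
        return i, basis @ (x - mean)
```

Direct fit replaces global model selection here: no single global chart is optimized, so the usual bias-variance trade-off of one parametric model does not arise in the same form.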
Related papers
- Multi-Modal Cognitive Maps based on Neural Networks trained on Successor Representations [3.4916237834391874]
Cognitive maps are a proposed concept on how the brain efficiently organizes memories and retrieves context out of them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network learns the similarities between novel inputs and the training database, and thereby successfully learns the cognitive map representation.
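The successor representation the network above is trained on can be computed directly in the tabular case. The sketch below is a standard TD-learning formulation on a toy ring of states, not the paper's multi-modal network:

```python
import numpy as np

# Tabular successor representation learned by TD updates on a ring of
# states under a random walk (a standard construction, used here only
# to illustrate what the network approximates).
n_states, gamma, alpha = 8, 0.9, 0.1
M = np.zeros((n_states, n_states))   # successor matrix M(s, s')
rng = np.random.default_rng(0)
s = 0
for _ in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states   # random walk step
    onehot = np.eye(n_states)[s]
    # TD update: M(s) <- M(s) + alpha * (1(s) + gamma * M(s') - M(s))
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next
# Row M[s] now approximates expected discounted future state occupancies
# from s; its peaked, place-cell-like profile is what makes SR a natural
# substrate for cognitive maps.
```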
arXiv Detail & Related papers (2023-12-22T12:44:15Z)
- Bio-inspired computational memory model of the Hippocampus: an approach to a neuromorphic spike-based Content-Addressable Memory [0.0]
We propose a bio-inspired spiking content-addressable memory model based on the CA3 region of the hippocampus.
A set of experiments based on functional, stress and applicability tests were performed to demonstrate its correct functioning.
This work presents the first hardware implementation of a fully-functional bio-inspired spiking hippocampal content-addressable memory model.
arXiv Detail & Related papers (2023-10-09T17:05:23Z)
- Bio-inspired spike-based Hippocampus and Posterior Parietal Cortex models for robot navigation and environment pseudo-mapping [52.77024349608834]
This work proposes a spike-based robotic navigation and environment pseudo-mapping system.
The hippocampus is in charge of maintaining a representation of an environment state map, and the PPC is in charge of local decision-making.
This is the first implementation of an environment pseudo-mapping system with dynamic learning based on a bio-inspired hippocampal memory.
arXiv Detail & Related papers (2023-05-22T10:20:34Z)
- Emergence of Maps in the Memories of Blind Navigation Agents [68.41901534985575]
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment.
We ask if machines -- specifically, artificial intelligence (AI) navigation agents -- also build implicit (or 'mental') maps.
Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms.
arXiv Detail & Related papers (2023-01-30T20:09:39Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- A bio-inspired implementation of a sparse-learning spike-based hippocampus memory model [0.0]
We propose a novel bio-inspired memory model based on the hippocampus.
It can learn memories, recall them from a cue and even forget memories when trying to learn others with the same cue.
This work presents the first hardware implementation of a fully functional bio-inspired spike-based hippocampus memory model.
arXiv Detail & Related papers (2022-06-10T07:48:29Z)
- Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity [9.453554184019108]
Hebbian plasticity is believed to play a pivotal role in biological memory.
We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity.
We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities.
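The Hebbian principle at work in this entry ("units that fire together wire together") can be shown in its simplest rate-based form. This sketch uses the classical outer-product rule and iterated recall (a Hopfield-style construction), not the paper's spiking architecture:

```python
import numpy as np

def hebbian_store(patterns, eta=1.0):
    """Strengthen weights between co-active units (Hebb's rule):
    dW_ij is proportional to post_i * pre_j."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                 # each p is a vector in {-1, +1}^n
        W += eta * np.outer(p, p) / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, cue, steps=5):
    """Retrieve a stored pattern from a corrupted cue by iterating
    the network's thresholded dynamics."""
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
    return x
```

Storage is a single pass over the data, which is what makes Hebbian traces suitable as a fast, one-shot memory alongside slower gradient-based learning.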
arXiv Detail & Related papers (2022-05-23T12:48:37Z)
- Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into categorical memory which is constant beyond the domains.
arXiv Detail & Related papers (2022-04-07T17:34:01Z)
- Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z)
- Memory and attention in deep learning [19.70919701635945]
Memory construction for machines is inevitable.
Recent progresses on modeling memory in deep learning have revolved around external memory constructions.
The aim of this thesis is to advance the understanding on memory and attention in deep learning.
arXiv Detail & Related papers (2021-07-03T09:21:13Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
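The ordinal patterns this hypothesis refers to have a precise standard definition: the rank order of values within a sliding window. A minimal extractor (an illustration of the concept, not the authors' model) looks like this:

```python
def ordinal_patterns(signal, order=3):
    """Map each window of `order` consecutive samples to its ordinal
    pattern: the tuple of window positions sorted by value."""
    out = []
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        out.append(pattern)
    return out
```

For example, the window [2, 3, 1] yields the pattern (2, 0, 1): the smallest value sits at position 2, the next at position 0, the largest at position 1. Only the order structure survives, which is what makes such patterns a compact index for temporal sequences.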
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.