Multi-Modal Cognitive Maps based on Neural Networks trained on Successor
Representations
- URL: http://arxiv.org/abs/2401.01364v1
- Date: Fri, 22 Dec 2023 12:44:15 GMT
- Title: Multi-Modal Cognitive Maps based on Neural Networks trained on Successor
Representations
- Authors: Paul Stoewer, Achim Schilling, Andreas Maier and Patrick Krauss
- Abstract summary: Cognitive maps are a proposed concept on how the brain efficiently organizes memories and retrieves context out of them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network successfully learns the similarities between novel inputs and the training database, and thereby the representation of the cognitive map.
- Score: 3.4916237834391874
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cognitive maps are a proposed concept on how the brain efficiently organizes
memories and retrieves context out of them. The entorhinal-hippocampal complex
is heavily involved in episodic and relational memory processing, as well as
spatial navigation, and is thought to build cognitive maps via place and grid
cells. To make use of the promising properties of cognitive maps, we set up a
multi-modal neural network using successor representations which is able to
model place cell dynamics and cognitive map representations. Here, we use
multi-modal inputs consisting of images and word embeddings. The network
successfully learns the similarities between novel inputs and the training
database, and thereby the representation of the cognitive map. Subsequently, the
prediction of the network can be used to infer from one modality to another
with over $90\%$ accuracy. The proposed method could therefore be a building
block to improve current AI systems for better understanding of the environment
and the different modalities in which objects appear. Associating specific
modalities with certain encounters can therefore lead to context awareness in
novel situations: when a similar encounter occurs with less information, the
missing information can be inferred from the learned cognitive
map. Cognitive maps, as represented by the entorhinal-hippocampal complex in
the brain, organize and retrieve context from memories, suggesting that large
language models (LLMs) like ChatGPT could harness similar architectures to
function as a high-level processing center, akin to how the hippocampus
operates within the cortex hierarchy. Finally, by utilizing multi-modal inputs,
LLMs can potentially bridge the gap between different forms of data (like
images and words), paving the way for context-awareness and grounding of
abstract concepts through learned associations, addressing the grounding
problem in AI.
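The successor representation (SR) that the paper's networks are trained on can be illustrated with a small sketch. The toy environment, variable names, and closed-form computation below are illustrative assumptions, not the authors' code; they show only the core quantity involved: the matrix of expected discounted future state occupancies, whose rows resemble place-cell tuning curves.

```python
# Minimal sketch of a successor representation (SR) for a random walk on a
# 1-D track. All names and the toy environment are illustrative assumptions.
import numpy as np

n_states = 5          # toy 1-D track with 5 positions
gamma = 0.9           # discount factor

# Random-walk transition matrix: step left or right with equal probability
# (reflecting at the boundaries).
T = np.zeros((n_states, n_states))
for s in range(n_states):
    for nb in (max(s - 1, 0), min(s + 1, n_states - 1)):
        T[s, nb] += 0.5

# Closed-form SR: M = (I - gamma * T)^-1. Entry M[s, s'] is the expected
# discounted number of future visits to s' when starting from s.
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# Each row of M is a "place field": occupancy peaks at the current state
# and falls off with distance, mimicking place-cell tuning curves.
assert all(np.argmax(M[s]) == s for s in range(n_states))
```

A network trained to predict these rows from sensory input (images, word embeddings) approximates the SR without ever inverting the matrix, which is the setup the abstract describes.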
Related papers
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Finding Concept Representations in Neural Networks with Self-Organizing
Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how activation of layers of neural networks correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
arXiv Detail & Related papers (2023-12-10T12:10:34Z) - Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain
Activation Maps [59.648646222905235]
We propose Chat2Brain, a method that combines LLMs with a basic text-to-image model, Text2Brain, to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex tasks of text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z) - Conceptual Cognitive Maps Formation with Neural Successor Networks and
Word Embeddings [7.909848251752742]
We introduce a model that employs successor representations and neural networks, along with word embeddings, to construct a cognitive map of three separate concepts.
The network adeptly learns two different scaled maps and situates new information in proximity to related pre-existing representations.
We suggest that our model could potentially improve current AI models by providing multi-modal context information to any input.
arXiv Detail & Related papers (2023-07-04T09:11:01Z) - Language Knowledge-Assisted Representation Learning for Skeleton-Based
Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN is a graph convolutional network that uses knowledge assistance from large language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z) - Neural Network based Formation of Cognitive Maps of Semantic Spaces and
the Emergence of Abstract Concepts [7.909848251752742]
We present a neural network, which learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors.
The network constructs a cognitive map of 'animal space' based on the principle of successor representations with an accuracy of around 30%.
We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class.
arXiv Detail & Related papers (2022-10-28T11:16:33Z) - Functional2Structural: Cross-Modality Brain Networks Representation
Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - Neural Network based Successor Representations of Space and Language [6.748976209131109]
We present a neural network based approach to learn multi-scale successor representations of structured knowledge.
In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations.
We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning on the path towards artificial general intelligence.
arXiv Detail & Related papers (2022-02-22T21:52:46Z) - Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z) - CogAlign: Learning to Align Textual Neural Representations to Cognitive
Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain
Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.