Neural Network based Formation of Cognitive Maps of Semantic Spaces and
the Emergence of Abstract Concepts
- URL: http://arxiv.org/abs/2210.16062v1
- Date: Fri, 28 Oct 2022 11:16:33 GMT
- Authors: Paul Stoewer, Achim Schilling, Andreas Maier, Patrick Krauss
- Abstract summary: We present a neural network, which learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors.
The network constructs a cognitive map of 'animal space' based on the principle of successor representations with an accuracy of around 30%.
We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The hippocampal-entorhinal complex plays a major role in the organization of
memory and thought. The formation of and navigation in cognitive maps of
arbitrary mental spaces via place and grid cells can serve as a representation
of memories and experiences and their relations to each other. The multi-scale
successor representation is proposed to be the mathematical principle
underlying place and grid cell computations. Here, we present a neural network,
which learns a cognitive map of a semantic space based on 32 different animal
species encoded as feature vectors. The neural network successfully learns the
similarities between different animal species and constructs a cognitive map of
'animal space' based on the principle of successor representations, with an
accuracy of around 30%, which is close to the theoretical maximum given that
every animal species has more than one possible successor, i.e. nearest
neighbor in feature space. Furthermore, a hierarchical structure, i.e.
different scales of cognitive maps, can be modeled based on multi-scale
successor representations. We find that, in fine-grained cognitive maps, the
animal vectors are evenly distributed in feature space. In contrast, in
coarse-grained maps, animal vectors are highly clustered according to their
biological class, i.e. amphibians, mammals and insects. This could be a
possible mechanism explaining the emergence of new abstract semantic concepts.
Finally, even completely new or incomplete input can be represented by
interpolation of the representations from the cognitive map with remarkably
high accuracy of up to 95%. We conclude that the successor representation can
serve as a weighted pointer to past memories and experiences, and may therefore
be a crucial building block for future machine learning to include prior
knowledge, and to derive context knowledge from novel input.
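The successor representation underlying the paper's cognitive maps has a standard closed form, M = (I - γT)⁻¹, where T is a transition matrix over states. Below is a minimal, hypothetical sketch of this computation, not the authors' implementation: the animal feature vectors are random placeholders, and the similarity-based transition rule (a softmax over negative feature distances) is an assumption used only to illustrate how fine-grained (small γ) and coarse-grained (large γ) maps arise from the same T.

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_features = 8, 5
# Placeholder feature vectors standing in for the paper's 32 animal species.
features = rng.random((n_animals, n_features))

# Assumed transition rule: species closer in feature space are more likely
# successors (softmax over negative pairwise distances, no self-transitions).
dist = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)          # exp(-inf) = 0 kills self-transitions
T = np.exp(-dist)
T /= T.sum(axis=1, keepdims=True)       # rows are probability distributions

def successor_representation(T, gamma):
    """Closed-form SR: M = sum_t gamma^t T^t = (I - gamma*T)^-1."""
    return np.linalg.inv(np.eye(len(T)) - gamma * T)

# Small gamma: representations dominated by immediate neighbors (fine map).
# Large gamma: long-horizon structure, which the paper links to clustering
# by biological class (coarse map).
M_fine = successor_representation(T, gamma=0.3)
M_coarse = successor_representation(T, gamma=0.9)
```

With this formulation, sweeping γ yields the multi-scale hierarchy the abstract describes: each M is a weighted pointer to future (and, by symmetry of the similarity structure, related) states, and rows of M for a novel feature vector can be approximated by interpolating rows of existing states.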
Related papers
- Does Spatial Cognition Emerge in Frontier Models? [56.47912101304053]
We present SPACE, a benchmark that systematically evaluates spatial cognition in frontier models.
Results suggest that contemporary frontier models fall short of the spatial intelligence of animals.
arXiv Detail & Related papers (2024-10-09T01:41:49Z)
- Multi-Modal Cognitive Maps based on Neural Networks trained on Successor Representations [3.4916237834391874]
Cognitive maps are a proposed concept of how the brain efficiently organizes memories and retrieves context from them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network learns the similarities between novel inputs and the training database, and thereby successfully learns the representation of the cognitive map.
arXiv Detail & Related papers (2023-12-22T12:44:15Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Conceptual Cognitive Maps Formation with Neural Successor Networks and Word Embeddings [7.909848251752742]
We introduce a model that employs successor representations and neural networks, along with word embedding, to construct a cognitive map of three separate concepts.
The network adeptly learns two different scaled maps and situates new information in proximity to related pre-existing representations.
We suggest that our model could potentially improve current AI models by providing multi-modal context information to any input.
arXiv Detail & Related papers (2023-07-04T09:11:01Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolution network using large-scale language models (LLM) knowledge assistance.
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Neural Network based Successor Representations of Space and Language [6.748976209131109]
We present a neural network based approach to learn multi-scale successor representations of structured knowledge.
In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations.
We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning on the path towards artificial general intelligence.
arXiv Detail & Related papers (2022-02-22T21:52:46Z)
- How to build a cognitive map: insights from models of the hippocampal formation [0.45880283710344055]
The concept of a cognitive map has emerged as one of the leading metaphors for these capacities.
Unravelling the learning and neural representation of such a map has become a central focus of neuroscience.
arXiv Detail & Related papers (2022-02-03T16:49:37Z)
- Grounding Psychological Shape Space in Convolutional Neural Networks [0.0]
We use convolutional neural networks to learn a generalizable mapping between perceptual inputs and a recently proposed psychological similarity space for the shape domain.
Our results indicate that a classification-based multi-task learning scenario yields the best results, but that its performance is relatively sensitive to the dimensionality of the similarity space.
arXiv Detail & Related papers (2021-11-16T12:21:07Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z)
- Transferring Dense Pose to Proximal Animal Classes [83.84439508978126]
We show that it is possible to transfer the knowledge existing in dense pose recognition for humans, as well as in more general object detectors and segmenters, to the problem of dense pose recognition in other classes.
We do this by establishing a DensePose model for the new animal which is also geometrically aligned to humans.
We also introduce two benchmark datasets labelled in the manner of DensePose for the class chimpanzee and use them to evaluate our approach.
arXiv Detail & Related papers (2020-02-28T21:43:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.