Conceptual Cognitive Maps Formation with Neural Successor Networks and
Word Embeddings
- URL: http://arxiv.org/abs/2307.01577v1
- Date: Tue, 4 Jul 2023 09:11:01 GMT
- Title: Conceptual Cognitive Maps Formation with Neural Successor Networks and
Word Embeddings
- Authors: Paul Stoewer, Achim Schilling, Andreas Maier and Patrick Krauss
- Abstract summary: We introduce a model that employs successor representations and neural networks, along with word embeddings, to construct a cognitive map of three separate concepts.
The network adeptly learns two different scaled maps and situates new information in proximity to related pre-existing representations.
We suggest that our model could potentially improve current AI models by providing multi-modal context information to any input.
- Score: 7.909848251752742
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The human brain possesses the extraordinary capability to contextualize the
information it receives from our environment. The entorhinal-hippocampal complex plays
a critical role in this function, as it is deeply engaged in memory processing
and constructing cognitive maps using place and grid cells. Comprehending and
leveraging this ability could significantly augment the field of artificial
intelligence. The multi-scale successor representation serves as a good model
for the functionality of place and grid cells and has already shown promise in
this role. Here, we introduce a model that employs successor representations
and neural networks, along with word embedding vectors, to construct a
cognitive map of three separate concepts. The network adeptly learns two
different scaled maps and situates new information in proximity to related
pre-existing representations. The dispersion of information across the
cognitive map varies according to its scale: either heavily
concentrated, resulting in the formation of the three concepts, or spread
evenly throughout the map. We suggest that our model could potentially improve
current AI models by providing multi-modal context information to any input,
based on a similarity metric for the input and pre-existing knowledge
representations.
Related papers
- Multi-Modal Cognitive Maps based on Neural Networks trained on Successor
Representations [3.4916237834391874]
Cognitive maps are a proposed concept on how the brain efficiently organizes memories and retrieves context out of them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network learns the similarities between novel inputs and the training database, and thereby successfully learns the representation of the cognitive map.
arXiv Detail & Related papers (2023-12-22T12:44:15Z)
- Finding Concept Representations in Neural Networks with Self-Organizing
Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how activations of neural network layers correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
arXiv Detail & Related papers (2023-12-10T12:10:34Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal
Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Neural Network based Formation of Cognitive Maps of Semantic Spaces and
the Emergence of Abstract Concepts [7.909848251752742]
We present a neural network, which learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors.
The network constructs a cognitive map of 'animal space' based on the principle of successor representations with an accuracy of around 30%.
We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class.
arXiv Detail & Related papers (2022-10-28T11:16:33Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Multi-Object Navigation with dynamically learned neural implicit
representations [10.182418917501064]
We propose to structure neural networks with two neural implicit representations, which are learned dynamically during each episode.
We evaluate the agent on Multi-Object Navigation and show the high impact of using neural implicit representations as a memory source.
arXiv Detail & Related papers (2022-10-11T04:06:34Z)
- Functional2Structural: Cross-Modality Brain Networks Representation
Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Neural Network based Successor Representations of Space and Language [6.748976209131109]
We present a neural network based approach to learn multi-scale successor representations of structured knowledge.
In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations.
We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning towards artificial general intelligence.
arXiv Detail & Related papers (2022-02-22T21:52:46Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Towards a Predictive Processing Implementation of the Common Model of
Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.