Neural Network based Successor Representations of Space and Language
- URL: http://arxiv.org/abs/2202.11190v1
- Date: Tue, 22 Feb 2022 21:52:46 GMT
- Title: Neural Network based Successor Representations of Space and Language
- Authors: Paul Stoewer, Christian Schlieker, Achim Schilling, Claus Metzner,
Andreas Maier and Patrick Krauss
- Abstract summary: We present a neural network based approach to learn multi-scale successor representations of structured knowledge.
In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations.
- We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning towards artificial general intelligence.
- Score: 6.748976209131109
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How does the mind organize thoughts? The hippocampal-entorhinal complex is
thought to support domain-general representation and processing of structural
knowledge of arbitrary state, feature and concept spaces. In particular, it
enables the formation of cognitive maps, and navigation on these maps, thereby
broadly contributing to cognition. It has been proposed that the concept of
multi-scale successor representations provides an explanation of the underlying
computations performed by place and grid cells. Here, we present a neural
network based approach to learn such representations, and its application to
different scenarios: a spatial exploration task based on supervised learning, a
spatial navigation task based on reinforcement learning, and a non-spatial task
where linguistic constructions have to be inferred by observing sample
sentences. In all scenarios, the neural network correctly learns and
approximates the underlying structure by building successor representations.
Furthermore, the resulting neural firing patterns are strikingly similar to
experimentally observed place and grid cell firing patterns. We conclude that
cognitive maps and neural network-based successor representations of structured
knowledge provide a promising way to overcome some of the shortcomings of deep
learning towards artificial general intelligence.
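The multi-scale successor representation referred to in the abstract has a well-known closed form: for a row-stochastic transition matrix T and discount factor gamma, M = sum_t gamma^t T^t = (I - gamma T)^(-1), and varying gamma yields maps at different spatial scales. A minimal sketch of this computation (the five-state random-walk environment and all parameter values are illustrative, not the paper's actual setup):

```python
import numpy as np

def successor_representation(T, gamma):
    """Closed-form successor representation for a row-stochastic
    transition matrix T and discount gamma:
    M = sum_t gamma^t T^t = (I - gamma * T)^(-1)."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Random walk on a 1-D track with 5 states (reflecting boundaries).
n = 5
T = np.zeros((n, n))
for s in range(n):
    for nb in (max(s - 1, 0), min(s + 1, n - 1)):
        T[s, nb] += 0.5  # step left or right with equal probability

# Different discounts give successor maps at different spatial scales:
# small gamma emphasizes immediate neighbors, large gamma spreads
# expected occupancy broadly, analogous to multi-scale place fields.
M_small = successor_representation(T, gamma=0.5)
M_large = successor_representation(T, gamma=0.95)
```

Since T is row-stochastic, each row of M sums to 1 / (1 - gamma), which is a quick sanity check on the computation.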
Related papers
- Identifying Sub-networks in Neural Networks via Functionally Similar Representations [41.028797971427124]
We take a step toward automating the understanding of neural networks by investigating the existence of distinct sub-networks.
Our approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
arXiv Detail & Related papers (2024-10-21T20:19:00Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Multi-Modal Cognitive Maps based on Neural Networks trained on Successor Representations [3.4916237834391874]
Cognitive maps are a proposed concept on how the brain efficiently organizes memories and retrieves context out of them.
We set up a multi-modal neural network using successor representations which is able to model place cell dynamics and cognitive map representations.
The network successfully learns the similarities between novel inputs and the training database, and thereby the representation of the cognitive map.
arXiv Detail & Related papers (2023-12-22T12:44:15Z)
- Finding Concept Representations in Neural Networks with Self-Organizing Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how activation of layers of neural networks correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
arXiv Detail & Related papers (2023-12-10T12:10:34Z)
- Conceptual Cognitive Maps Formation with Neural Successor Networks and Word Embeddings [7.909848251752742]
We introduce a model that employs successor representations and neural networks, along with word embedding, to construct a cognitive map of three separate concepts.
The network adeptly learns two differently scaled maps and situates new information in proximity to related pre-existing representations.
We suggest that our model could potentially improve current AI models by providing multi-modal context information to any input.
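The idea of situating new information near related pre-existing representations can be illustrated with a nearest-neighbor lookup over word embeddings. A hedged sketch; the toy 3-D vectors and the concept names are invented for illustration, not taken from the paper or from any real embedding model:

```python
import numpy as np

# Toy word embeddings (hypothetical 3-D vectors, not real model output).
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.9, 0.4]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def place_new_concept(vec, known):
    """Situate a new concept next to its most similar known concept."""
    sims = {name: cosine(vec, v) for name, v in known.items()}
    return max(sims, key=sims.get)

# A new input (say, "wolf") lands nearest its most related concept.
new_vec = np.array([0.75, 0.25, 0.1])
nearest = place_new_concept(new_vec, embeddings)
```

With these toy vectors the new input is placed next to "dog", its most similar stored concept.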
arXiv Detail & Related papers (2023-07-04T09:11:01Z)
- Multi-Object Navigation with dynamically learned neural implicit representations [10.182418917501064]
We propose to structure neural networks with two neural implicit representations, which are learned dynamically during each episode.
We evaluate the agent on Multi-Object Navigation and show the high impact of using neural implicit representations as a memory source.
arXiv Detail & Related papers (2022-10-11T04:06:34Z)
- Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
arXiv Detail & Related papers (2022-06-06T15:05:36Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.