The Tensor Brain: Semantic Decoding for Perception and Memory
- URL: http://arxiv.org/abs/2001.11027v3
- Date: Mon, 10 Feb 2020 08:41:03 GMT
- Title: The Tensor Brain: Semantic Decoding for Perception and Memory
- Authors: Volker Tresp and Sahand Sharifzadeh and Dario Konopatzki and Yunpu Ma
- Abstract summary: We analyse perception and memory using mathematical models for knowledge graphs and tensors.
We argue that a biological realization of perception and memory imposes constraints on information processing.
In particular, we propose that explicit perception and declarative memories require a semantic decoder.
- Score: 25.49830575143093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyse perception and memory, using mathematical models for knowledge
graphs and tensors, to gain insights into the corresponding functionalities of
the human mind. Our discussion is based on the concept of propositional
sentences consisting of subject-predicate-object (SPO) triples for
expressing elementary facts. SPO sentences are the basis for most natural
languages but might also be important for explicit perception and declarative
memories, as well as intra-brain communication and the ability to argue and
reason. A set of SPO sentences can be described as a knowledge graph, which can
be transformed into an adjacency tensor. We introduce tensor models, where
concepts have dual representations as indices and associated embeddings, two
constructs we believe are essential for the understanding of implicit and
explicit perception and memory in the brain. We argue that a biological
realization of perception and memory imposes constraints on information
processing. In particular, we propose that explicit perception and declarative
memories require a semantic decoder, which, in a simple realization, is based
on four layers: first, a sensory memory layer as a buffer for sensory input;
second, an index layer representing concepts; third, a memoryless
representation layer for the broadcasting of information (the "blackboard"
or "canvas" of the brain); and fourth, a working memory layer as a
processing center and data buffer. We discuss the operations of the four layers
and relate them to the global workspace theory. In a Bayesian brain
interpretation, semantic memory defines the prior for observable triple
statements. We propose that, in evolution and during development, semantic
memory, episodic memory, and natural language evolved as emergent properties
as agents strove to gain a deeper understanding of sensory information.
Related papers
- How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols [26.516135696182392]
We provide an overview of the tensor brain model, including recent developments.
The representation layer is a model for the subsymbolic global workspace from consciousness research.
The index layer contains symbols for concepts, time instances, and predicates.
arXiv Detail & Related papers (2024-09-19T15:45:38Z)
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations? [30.495681024162835]
We compare prompt-tuned and fine-tuned representations in neural decoding.
We find that a more brain-consistent tuning method yields representations that better correlate with brain data.
This indicates that our brain encodes more fine-grained concept information than shallow syntactic information.
arXiv Detail & Related papers (2023-10-03T07:34:30Z)
- Semantic HELM: A Human-Readable Memory for Reinforcement Learning [9.746397419479445]
We propose a novel memory mechanism that represents past events in human language.
We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component.
Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored.
arXiv Detail & Related papers (2023-06-15T17:47:31Z)
- Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from catastrophic forgetting.
We find that 1) vanilla language models are forgetful; 2) pre-training leads to retentive language models; and 3) knowledge relevance and diversification significantly influence memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding [16.37225919719441]
We present a unified computational theory of perception and memory.
In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes.
arXiv Detail & Related papers (2021-09-27T23:32:44Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the storage of the relationships between them (relational memory).
Our proposed two-memory model achieves competitive results on a diverse range of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.