The Tensor Brain: A Unified Theory of Perception, Memory and Semantic
Decoding
- URL: http://arxiv.org/abs/2109.13392v1
- Date: Mon, 27 Sep 2021 23:32:44 GMT
- Title: The Tensor Brain: A Unified Theory of Perception, Memory and Semantic
Decoding
- Authors: Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma
- Abstract summary: We present a unified computational theory of perception and memory.
In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes.
- Score: 16.37225919719441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a unified computational theory of perception and memory. In our
model, perception, episodic memory, and semantic memory are realized by
different functional and operational modes of the oscillating interactions
between an index layer and a representation layer in a bilayer tensor network
(BTN). The memoryless semantic representation layer broadcasts information.
In cognitive neuroscience, it would correspond to the "mental canvas" or the
"global workspace" and reflects the cognitive brain state. The symbolic index
layer represents concepts and past episodes, whose semantic embeddings are
implemented in the connection weights between the two layers. In addition, we
propose a working memory layer as a processing center and information buffer.
Episodic and semantic memory realize memory-based reasoning, i.e., the recall
of relevant past information to enrich perception, and are personalized to an
agent's current state, as well as to an agent's unique memories. Episodic
memory stores and retrieves past observations and provides provenance and
context. Recent episodic memory enriches perception by retrieving perceptual
experiences, which give the agent a sense of the here and
now: to understand its own state, and the world's semantic state in general,
the agent needs to know what happened recently, in recent scenes, and to
recently perceived entities. Remote episodic memory retrieves relevant past
experiences, contributes to our conscious self, and, together with semantic
memory, to a large degree defines who we are as individuals.
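The abstract's core mechanism, oscillating interactions between a symbolic index layer and a subsymbolic representation layer, with concept embeddings stored in the connection weights, can be illustrated with a minimal sketch. This is not the authors' implementation; the matrix `E`, the softmax decoding step, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_concepts, d = 5, 16  # 5 symbolic indices; 16-dim representation layer

# Connection weights between the layers: row i is the semantic
# embedding of concept i (an assumed stand-in for the BTN weights).
E = rng.normal(size=(n_concepts, d))

def decode(h):
    """Semantic decoding: map a representation-layer state h to a
    probability distribution over the symbolic index layer."""
    scores = E @ h                      # inner product with each embedding
    p = np.exp(scores - scores.max())   # numerically stable softmax
    return p / p.sum()

def encode(i):
    """Activating index i broadcasts its embedding back onto the
    representation layer."""
    return E[i]

# One decode/encode oscillation: a noisy percept is decoded to a
# symbol, which is then re-encoded to clean up the layer state.
percept = E[2] + 0.3 * rng.normal(size=d)   # noisy view of concept 2
p = decode(percept)
winner = int(np.argmax(p))
h_clean = encode(winner)
```

In this toy reading, episodic memory would add indices for time instances alongside concept indices, so that past layer states can be re-activated through the same encode step; that extension is not shown here.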
Related papers
- How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols [26.516135696182392]
We provide an overview of the tensor brain model, including recent developments.
The representation layer is a model for the subsymbolic global workspace from consciousness research.
The index layer contains symbols for concepts, time instances, and predicates.
arXiv Detail & Related papers (2024-09-19T15:45:38Z) - Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z) - What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z) - Semantic HELM: A Human-Readable Memory for Reinforcement Learning [9.746397419479445]
We propose a novel memory mechanism that represents past events in human language.
We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component.
Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored.
arXiv Detail & Related papers (2023-06-15T17:47:31Z) - Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information, and hierarchical attention to selectively retrieve information about others.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z) - A Machine with Short-Term, Episodic, and Semantic Memory Systems [9.42475956340287]
Inspired by the cognitive science theory of the explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems.
Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
arXiv Detail & Related papers (2022-12-05T08:34:23Z) - Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into categorical memory which is constant beyond the domains.
arXiv Detail & Related papers (2022-04-07T17:34:01Z) - Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain
Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
In doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z) - Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the storage of their relationships (relational memory).
Our proposed two-memory model achieves competitive results across a diverse set of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z) - The Tensor Brain: Semantic Decoding for Perception and Memory [25.49830575143093]
We analyse perception and memory using mathematical models for knowledge graphs and tensors.
We argue that a biological realization of perception and memory imposes constraints on information processing.
In particular, we propose that explicit perception and declarative memories require a semantic decoder.
arXiv Detail & Related papers (2020-01-29T07:48:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.