How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols
- URL: http://arxiv.org/abs/2409.12846v1
- Date: Thu, 19 Sep 2024 15:45:38 GMT
- Title: How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols
- Authors: Volker Tresp, Hang Li
- Abstract summary: We provide an overview of the tensor brain model, including recent developments.
The representation layer is a model for the subsymbolic global workspace from consciousness research.
The index layer contains symbols for concepts, time instances, and predicates.
- Score: 26.516135696182392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tensor brain has been introduced as a computational model for perception and memory. We provide an overview of the tensor brain model, including recent developments. The tensor brain has two major layers: the representation layer and the index layer. The representation layer is a model for the subsymbolic global workspace from consciousness research. The state of the representation layer is the cognitive brain state. The index layer contains symbols for concepts, time instances, and predicates. In a bottom-up operation, the cognitive brain state is encoded by the index layer as symbolic labels. In a top-down operation, symbols are decoded and written to the representation layer. This feeds to earlier processing layers as embodiment. The top-down operation became the basis for semantic memory. The embedding vector of a concept forms the connection weights between its index and the representation layer. The embedding is the signature or "DNA" of a concept, which is decoded by the brain when its index is activated. It integrates all that is known about a concept from different experiences, modalities, and symbolic decodings. Although the tensor brain is a computational model, it has been suggested that it might be related to the actual operation of the brain. The sequential nature of symbol generation might have been a prerequisite to the generation of natural language. We describe an attention mechanism and discuss multitasking by multiplexing. We emphasize the inherent multimodality of the tensor brain. Finally, we discuss embedded and symbolic reasoning.
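To make the encode/decode mechanics in the abstract concrete, here is a minimal NumPy sketch of the two core operations: bottom-up encoding of the cognitive brain state into symbol indices, and top-down decoding of an activated symbol back into the representation layer. The layer sizes, the softmax readout, the additive write-back, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 5   # indices for concepts, time instances, predicates (assumed size)
dim = 8         # dimensionality of the representation layer (assumed)

# Each symbol's embedding vector doubles as the connection weights between
# its index and the representation layer -- its "signature" or "DNA".
embeddings = rng.normal(size=(n_symbols, dim))

def encode_bottom_up(brain_state: np.ndarray) -> np.ndarray:
    """Bottom-up operation: score every symbolic index against the current
    cognitive brain state and return a probability over symbols
    (softmax readout, assumed)."""
    scores = embeddings @ brain_state
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def decode_top_down(symbol_index: int, brain_state: np.ndarray,
                    gain: float = 1.0) -> np.ndarray:
    """Top-down operation: write the activated symbol's embedding back into
    the representation layer, updating the cognitive brain state
    (additive update, assumed)."""
    return brain_state + gain * embeddings[symbol_index]

# Toy usage: start from a sensory-driven brain state, label it with the most
# likely symbol, then let that symbol's embedding feed back into the state.
state = rng.normal(size=dim)
probs = encode_bottom_up(state)
winner = int(np.argmax(probs))
state = decode_top_down(winner, state)
```

Note that the same embedding matrix serves both directions in this sketch: the bottom-up readout and the top-down write-back share the concept embeddings, mirroring the abstract's statement that a concept's embedding forms the connection weights between its index and the representation layer.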
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker [72.09076317574238]
SymbolicToM is a plug-and-play approach to investigate the belief states of characters in reading comprehension.
We show that SymbolicToM enhances off-the-shelf neural networks' theory of mind in a zero-shot setting while showing robust out-of-distribution performance compared to supervised baselines.
arXiv Detail & Related papers (2023-06-01T17:24:35Z)
- The Roles of Symbols in Neural-based AI: They are Not What You Think! [25.450989579215708]
We present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents.
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems.
arXiv Detail & Related papers (2023-04-26T15:33:41Z)
- Competitive learning to generate sparse representations for associative memory [0.0]
We propose a biologically plausible network that encodes images into codes that are suitable for associative memory.
It is organized into groups of neurons that specialize on local receptive fields, and learn through a competitive scheme.
arXiv Detail & Related papers (2023-01-05T17:57:52Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- HINT: Hierarchical Neuron Concept Explainer [35.07575535848492]
We study hierarchical concepts inspired by the hierarchical cognition process of human beings.
We propose HIerarchical Neuron concepT explainer (HINT) to effectively build bidirectional associations between neurons and hierarchical concepts.
HINT enables us to systematically and quantitatively study whether and how the implicit hierarchical relationships of concepts are embedded into neurons.
arXiv Detail & Related papers (2022-03-27T03:25:36Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding [16.37225919719441]
We present a unified computational theory of perception and memory.
In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes.
arXiv Detail & Related papers (2021-09-27T23:32:44Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- The Tensor Brain: Semantic Decoding for Perception and Memory [25.49830575143093]
We analyse perception and memory using mathematical models for knowledge graphs and tensors.
We argue that a biological realization of perception and memory imposes constraints on information processing.
In particular, we propose that explicit perception and declarative memories require a semantic decoder.
arXiv Detail & Related papers (2020-01-29T07:48:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.