Emergence of Symbols in Neural Networks for Semantic Understanding and
Communication
- URL: http://arxiv.org/abs/2304.06377v3
- Date: Sun, 25 Jun 2023 05:53:06 GMT
- Title: Emergence of Symbols in Neural Networks for Semantic Understanding and
Communication
- Authors: Yang Chen, Liangxuan Guo, Shan Yu
- Abstract summary: We propose a solution to endow neural networks with the ability to create symbols, understand semantics, and achieve communication.
SEA-net generates symbols that dynamically configure the network to perform specific tasks.
These symbols capture compositional semantic information that allows the system to acquire new functions purely by symbolic manipulation or communication.
- Score: 8.156761369660096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The capacity to generate meaningful symbols and effectively employ them for
advanced cognitive processes, such as communication, reasoning, and planning,
constitutes a fundamental and distinctive aspect of human intelligence.
Existing deep neural networks still lag notably behind human capabilities in
generating symbols for higher cognitive functions. Here, we propose a solution,
the symbol emergence artificial network (SEA-net), to endow neural networks
with the ability to create symbols, understand semantics, and achieve
communication.
SEA-net generates symbols that dynamically configure the network to perform
specific tasks. These symbols capture compositional semantic information that
allows the system to acquire new functions purely by symbolic manipulation or
communication. In addition, these self-generated symbols exhibit an intrinsic
structure resembling that of natural language, suggesting a common framework
underlying the generation and understanding of symbols in both human brains and
artificial neural networks. We believe that the proposed framework will be
instrumental in producing more capable systems that can synergize the strengths
of connectionist and symbolic approaches for artificial intelligence (AI).
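As a rough illustration of the core mechanism described in the abstract, the sketch below shows one generic way a symbol vector can dynamically configure a network: the symbol is mapped to feature-wise scale and shift parameters, so the same weights compute different functions for different symbols. This is a minimal, hypothetical sketch, not the authors' SEA-net implementation; the class name, shapes, and modulation scheme are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_layer(x, gamma, beta):
    """Feature-wise modulation: a 'symbol' scales and shifts hidden features."""
    return gamma * x + beta

class SymbolConditionedNet:
    """Toy network whose behaviour is configured by a symbol vector.

    The symbol is mapped to per-layer modulation parameters, so different
    symbols make the same weights compute different functions.
    """
    def __init__(self, d_in=8, d_hidden=16, d_out=4, d_symbol=6):
        self.W1 = rng.normal(0, 0.1, (d_in, d_hidden))
        self.W2 = rng.normal(0, 0.1, (d_hidden, d_out))
        # Linear, hypernetwork-like map from symbol to (gamma, beta).
        self.Wg = rng.normal(0, 0.1, (d_symbol, d_hidden))
        self.Wb = rng.normal(0, 0.1, (d_symbol, d_hidden))

    def forward(self, x, symbol):
        gamma = 1.0 + symbol @ self.Wg      # symbol decides how features are scaled
        beta = symbol @ self.Wb             # ...and shifted
        h = np.tanh(film_layer(x @ self.W1, gamma, beta))
        return h @ self.W2

net = SymbolConditionedNet()
x = rng.normal(size=8)
task_a, task_b = rng.normal(size=6), rng.normal(size=6)
print(net.forward(x, task_a))   # same input, different "task" symbols
print(net.forward(x, task_b))   # -> different outputs
```

In the paper, the key point is that such symbols are generated by the network itself and acquire compositional structure through training, which this toy does not attempt to show.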
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Exploring knowledge graph-based neural-symbolic system from application perspective [0.0]
Achieving human-like reasoning and interpretability in AI systems remains a substantial challenge.
The Neural-Symbolic paradigm, which integrates neural networks with symbolic systems, presents a promising pathway toward more interpretable AI.
This paper explores recent advancements in neural-symbolic integration based on Knowledge Graphs.
arXiv Detail & Related papers (2024-05-06T14:40:50Z)
- Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks [5.791414814676125]
This paper develops an innovative method that enables neural networks to generate and utilize knowledge graphs.
Our approach eschews the traditional dependence on word embedding models, instead mining concepts from neural networks and aligning them directly with human knowledge.
Experiments show that our method consistently captures network-generated concepts that align closely with human knowledge and can even uncover new, useful concepts not previously identified by humans.
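The abstract does not spell out the alignment procedure, so the following is only a generic sketch of matching concept vectors mined from a network against human-defined concept embeddings by cosine similarity; the random vectors, the similarity threshold, and the use of SciPy's Hungarian assignment are assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Stand-ins (random here) for concept vectors that would be mined from a trained
# network's hidden activations and for embeddings of human-defined concepts.
network_concepts = rng.normal(size=(5, 32))   # 5 concepts found in the network
human_concepts = rng.normal(size=(6, 32))     # 6 concepts named by humans

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

sim = cosine(network_concepts, human_concepts)   # similarity matrix
rows, cols = linear_sum_assignment(-sim)         # maximise total similarity
for r, c in zip(rows, cols):
    tag = "aligned" if sim[r, c] > 0.2 else "possibly novel concept"
    print(f"network concept {r} <-> human concept {c} (sim={sim[r, c]:+.2f}, {tag})")
```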
arXiv Detail & Related papers (2024-04-23T20:33:17Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
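A minimal sketch of the neural-then-symbolic split described above, with a stub standing in for the fine-tuned vision-language foundation model and a plain Python function playing the symbolic reasoner; the symbol names and the stub's output are invented for illustration.

```python
from typing import List

# Stand-in for a fine-tuned vision-language foundation model: it would map raw
# images to discrete symbols (here the output is faked for illustration).
def extract_symbols(image) -> List[str]:
    return ["digit_3", "plus", "digit_4"]   # hypothetical model output

# Symbolic component: ordinary rules operate on the extracted symbols.
def symbolic_evaluate(symbols: List[str]) -> int:
    values = [int(s.split("_")[1]) for s in symbols if s.startswith("digit_")]
    if "plus" in symbols:
        return sum(values)
    raise ValueError("unknown expression")

symbols = extract_symbols(image=None)             # perception: raw data -> symbols
print(symbols, "->", symbolic_evaluate(symbols))  # reasoning: symbols -> answer (7)
```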
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins whose attractor states correspond to symbolic sequences, reflecting the semanticity and compositionality characteristic of symbolic systems; these emerge through unsupervised learning rather than from pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, and offers a more comprehensive model of the dual nature of cognitive operations.
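As a toy picture of discrete basins in a continuous space, the sketch below uses a classical Hopfield-style attractor network: stored patterns act as symbol-like attractor states, and a noisy continuous state relaxes into one of them. The paper's model learns its attractors and sequences without supervision; this example uses fixed random patterns purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three discrete "symbolic" states, stored as +/-1 patterns.
patterns = np.sign(rng.normal(size=(3, 64)))

# Hopfield-style weights: each stored pattern becomes an attractor basin.
W = (patterns.T @ patterns) / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def settle(state, steps=20):
    """Synchronous Hopfield updates: the state falls into a stored basin."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Start from a noisy, continuous point near pattern 0 ...
noisy = patterns[0] + 0.8 * rng.normal(size=64)
final = settle(noisy.copy())
# ... and check which symbolic attractor it fell into.
overlaps = patterns @ final / 64
print("overlap with each stored symbol:", np.round(overlaps, 2))
```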
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- The Roles of Symbols in Neural-based AI: They are Not What You Think! [25.450989579215708]
We present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents.
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems.
arXiv Detail & Related papers (2023-04-26T15:33:41Z)
- Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions [69.40242990198]
Neuro-Symbolic (NeSy) integration combines symbolic reasoning with Neural Networks (NNs) for tasks requiring perception and reasoning.
Most NeSy systems rely on a continuous relaxation of logical knowledge, so no discrete decisions are made within the model pipeline.
We propose a NeSy system that learns NeSy-functions, i.e., the composition of a (set of) perception functions which map continuous data to discrete symbols, and a symbolic function over the set of symbols.
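The composition of perception and symbolic functions can be sketched as below, with an untrained random scorer standing in for the learned perception network and addition as the symbolic function; learning the discrete perception end to end together with the symbolic function, which is the paper's actual contribution, is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

N_SYMBOLS = 10  # e.g., digit classes 0..9

# Perception function: continuous input -> discrete symbol (argmax of a scorer).
# The scorer here is a random linear map purely for illustration; in the paper's
# setting it would be a trained network and the discretisation is learned.
W_perc = rng.normal(size=(16, N_SYMBOLS))

def perceive(x: np.ndarray) -> int:
    return int(np.argmax(x @ W_perc))

# Symbolic function over the discrete symbols (here: a known rule, addition).
def symbolic_fn(a: int, b: int) -> int:
    return a + b

# NeSy-function = composition of perception and symbolic reasoning.
def nesy_function(x1, x2):
    return symbolic_fn(perceive(x1), perceive(x2))

x1, x2 = rng.normal(size=16), rng.normal(size=16)
print("symbols:", perceive(x1), perceive(x2), "-> sum:", nesy_function(x1, x2))
```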
arXiv Detail & Related papers (2022-08-24T14:06:55Z)
- Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (for the end user) of data transmission.
NeSy AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure that generates the data.
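The sketch below does not implement GFlowNet; it only illustrates the semantic-communication premise that a transmitter can send a compact, task-relevant symbol instead of the raw signal. The sensor trace, the "dominant frequency" intent, and the function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Raw observation at the transmitter: a long sensor trace (expensive to send).
raw = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.normal(size=1024)

# Semantic encoder (hypothetical): reduce the trace to a tiny intent symbol,
# here the dominant frequency bin, which is all the end task is assumed to need.
def semantic_encode(signal: np.ndarray) -> int:
    spectrum = np.abs(np.fft.rfft(signal))
    return int(np.argmax(spectrum[1:]) + 1)     # skip the DC component

# Receiver reconstructs only what matters for the task from the symbol.
def semantic_decode(symbol: int, length: int) -> np.ndarray:
    t = np.arange(length)
    return np.sin(2 * np.pi * symbol * t / length)

symbol = semantic_encode(raw)                    # a single integer is "transmitted"
recovered = semantic_decode(symbol, len(raw))
print(f"sent 1 symbol (freq bin {symbol}) instead of {raw.size} samples")
print("correlation with raw signal:", round(float(np.corrcoef(raw, recovered)[0, 1]), 2))
```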
arXiv Detail & Related papers (2022-05-22T07:11:57Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
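A minimal sender-receiver sketch of deriving a discrete message from a neural network, using a Gumbel-softmax-style discretisation of the sender's output; the weights here are random and untrained, whereas in the paper the agents are trained in an interactive environment so that the emitted tokens acquire meaning.

```python
import numpy as np

rng = np.random.default_rng(5)

VOCAB = 8  # size of the emergent "vocabulary"

def gumbel_softmax(logits, tau=1.0):
    """Adds Gumbel noise and normalises; a common trick for making NN outputs
    (near-)discrete while keeping them trainable."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = np.exp((logits + g) / tau)
    return y / y.sum()

# Sender: maps its private observation to a discrete token (a "word").
W_send = rng.normal(size=(12, VOCAB))
def sender(obs):
    probs = gumbel_softmax(obs @ W_send, tau=0.5)
    return int(np.argmax(probs))          # hard token actually emitted

# Receiver: maps the token back to an internal state via its own embedding.
E_recv = rng.normal(size=(VOCAB, 12))
def receiver(token):
    return E_recv[token]

obs = rng.normal(size=12)
token = sender(obs)                        # discrete message crosses the channel
print("emitted token:", token, "| receiver state:", np.round(receiver(token)[:4], 2))
```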
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- A Memory-Augmented Neural Network Model of Abstract Rule Learning [2.3562267625320352]
We focus on neural networks' capacity for arbitrary role-filler binding.
We introduce the Emergent Symbol Binding Network (ESBN), a recurrent neural network model that learns to use an external memory as a binding mechanism.
This mechanism enables symbol-like variable representations to emerge through the ESBN's training process without the need for explicit symbol-processing machinery.
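The external-memory binding idea can be illustrated with a toy key-value store in which abstract "role" keys are bound to perceptual "filler" vectors at write time and retrieved by similarity at read time; this is a sketch of the general mechanism, not the ESBN's recurrent architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

class ExternalBindingMemory:
    """Minimal key-value memory: abstract 'role' keys are bound to 'filler'
    embeddings at write time and retrieved by key similarity at read time."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key: np.ndarray, value: np.ndarray):
        self.keys.append(key)
        self.values.append(value)

    def read(self, query: np.ndarray) -> np.ndarray:
        K = np.stack(self.keys)
        attn = np.exp(K @ query)
        attn /= attn.sum()
        return attn @ np.stack(self.values)   # soft retrieval of the bound filler

mem = ExternalBindingMemory()
role_a, role_b = rng.normal(size=8), rng.normal(size=8)        # symbol-like keys
filler_a, filler_b = rng.normal(size=16), rng.normal(size=16)  # perceptual fillers
mem.write(role_a, filler_a)
mem.write(role_b, filler_b)

retrieved = mem.read(role_a)               # query with the abstract role
print("closer to filler_a?", np.dot(retrieved, filler_a) > np.dot(retrieved, filler_b))
```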
arXiv Detail & Related papers (2020-12-13T22:40:07Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.