The Roles of Symbols in Neural-based AI: They are Not What You Think!
- URL: http://arxiv.org/abs/2304.13626v1
- Date: Wed, 26 Apr 2023 15:33:41 GMT
- Title: The Roles of Symbols in Neural-based AI: They are Not What You Think!
- Authors: Daniel L. Silver and Tom M. Mitchell
- Abstract summary: We present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents.
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems.
- Score: 25.450989579215708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose that symbols are first and foremost external communication tools
used between intelligent agents that allow knowledge to be transferred in a
more efficient and effective manner than having to experience the world
directly. But, they are also used internally within an agent through a form of
self-communication to help formulate, describe and justify subsymbolic patterns
of neural activity that truly implement thinking. Symbols, and our languages
that make use of them, not only allow us to explain our thinking to others and
ourselves, but also provide beneficial constraints (inductive bias) on learning
about the world. In this paper we present relevant insights from neuroscience
and cognitive science, about how the human brain represents symbols and the
concepts they refer to, and how today's artificial neural networks can do the
same. We then present a novel neuro-symbolic hypothesis and a plausible
architecture for intelligent agents that combines subsymbolic representations
for symbols and concepts for learning and reasoning. Our hypothesis and
associated architecture imply that symbols will remain critical to the future
of intelligent systems NOT because they are the fundamental building blocks of
thought, but because they are characterizations of subsymbolic processes that
constitute thought.
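To make the "beneficial constraints (inductive bias)" role of symbols concrete, here is a minimal sketch, assuming a PyTorch setup that is purely illustrative and not the authors' proposed architecture: a subsymbolic encoder serves a perceptual task, while an auxiliary symbol head that must name the concept acts as an extra constraint on the same hidden representation. The module names, dimensions, and loss weighting are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: a symbol as an auxiliary constraint (inductive bias)
# on a subsymbolic representation, not the paper's actual architecture.
class SymbolConstrainedEncoder(nn.Module):
    def __init__(self, n_inputs=64, n_hidden=32, n_symbols=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.task_head = nn.Linear(n_hidden, 1)            # primary subsymbolic task
        self.symbol_head = nn.Linear(n_hidden, n_symbols)  # names the learned concept

    def forward(self, x):
        h = self.encoder(x)          # subsymbolic concept representation
        return self.task_head(h), self.symbol_head(h)

model = SymbolConstrainedEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: random inputs, a scalar target, and a symbolic label per example.
x = torch.randn(8, 64)
y_task = torch.randn(8, 1)
y_symbol = torch.randint(0, 10, (8,))

optimizer.zero_grad()
task_out, symbol_logits = model(x)
loss = F.mse_loss(task_out, y_task) + 0.5 * F.cross_entropy(symbol_logits, y_symbol)
loss.backward()
optimizer.step()
```

In this toy setup the symbol never drives the computation directly; it only shapes the shared representation, loosely mirroring the abstract's claim that symbols characterize, rather than constitute, thought.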
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Neurosymbolic AI - Why, What, and How [9.551858963199987]
Humans interact with the environment using a combination of perception and cognition.
Machine cognition, on the other hand, encompasses more complex computations.
This article introduces the rapidly emerging paradigm of Neurosymbolic AI.
arXiv Detail & Related papers (2023-05-01T13:27:22Z)
- Emergence of Symbols in Neural Networks for Semantic Understanding and Communication [8.156761369660096]
We propose a solution to endow neural networks with the ability to create symbols, understand semantics, and achieve communication.
SEA-net generates symbols that dynamically configure the network to perform specific tasks.
These symbols capture compositional semantic information that allows the system to acquire new functions purely by symbolic manipulation or communication.
arXiv Detail & Related papers (2023-04-13T10:13:00Z)
- Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning [11.048601659933249]
How neural networks in the human brain represent commonsense knowledge is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence.
This work investigates how population encoding and spike-timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks (a generic pair-based STDP update is sketched after this list).
The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network.
arXiv Detail & Related papers (2022-07-11T05:22:38Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation (a generic straight-through discretization is sketched after this list).
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Neurosymbolic AI: The 3rd Wave [1.14219428942199]
Influential thinkers have raised concerns about the trust, safety, interpretability, and accountability of AI.
Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning.
Neural-symbolic computing has been an active area of research seeking to bring together robust learning in neural networks with reasoning and explainability.
arXiv Detail & Related papers (2020-12-10T18:31:38Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective in clarifying these particular principles is that they could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
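The Brain-inspired Graph Spiking Neural Networks entry above mentions spike-timing-dependent plasticity. The sketch below is the standard pair-based STDP rule from the literature, assumed here for illustration rather than taken from that paper; the amplitudes and time constants are arbitrary.

```python
import numpy as np

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre (milliseconds).
    Pre-before-post potentiates (LTP); post-before-pre depresses (LTD).
    Generic textbook rule, not the cited paper's exact model."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * np.exp(dt_ms / tau_minus)
    return 0.0

# Example: apply the rule to every pre/post spike pair of a single synapse.
pre_spikes = [10.0, 30.0, 55.0]   # hypothetical spike times in ms
post_spikes = [12.0, 50.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w = float(np.clip(w + stdp_delta_w(t_post - t_pre), 0.0, 1.0))
print(f"updated weight: {w:.4f}")
```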
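The Emergence of Machine Language entry above describes deriving a discrete representation from a neural network. One common way to do this, assumed here as a stand-in rather than confirmed as that paper's method, is a straight-through Gumbel-softmax sample over a symbol vocabulary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: emit a discrete symbol from continuous activity while
# keeping the whole pipeline differentiable (straight-through Gumbel-softmax).
class SymbolEmitter(nn.Module):
    def __init__(self, n_inputs=32, vocab_size=16):
        super().__init__()
        self.to_logits = nn.Linear(n_inputs, vocab_size)

    def forward(self, h, tau=1.0):
        logits = self.to_logits(h)
        # hard=True yields a one-hot symbol in the forward pass; gradients
        # flow through the soft sample, so a listener network can be trained
        # end to end through the emitted symbols.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

speaker = SymbolEmitter()
hidden = torch.randn(4, 32)       # continuous activity from some encoder
symbols = speaker(hidden)         # shape (4, 16), one-hot rows
print(symbols.argmax(dim=-1))     # discrete symbol indices that were emitted
```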