Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI
- URL: http://arxiv.org/abs/2406.11563v3
- Date: Fri, 26 Jul 2024 09:52:15 GMT
- Title: Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI
- Authors: André Platzer
- Abstract summary: Intersymbolic AI combines symbolic and subsymbolic AI to increase the effectiveness of AI.
The way Intersymbolic AI combines both symbolic and subsymbolic AI is likened to the way that the combination of both conscious and subconscious thought increases the effectiveness of human thought.
- Score: 3.20902205123321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This perspective piece calls for the study of the new field of Intersymbolic AI, by which we mean the combination of symbolic AI, whose building blocks have inherent significance/meaning, with subsymbolic AI, whose entirety creates significance/effect even though its individual building blocks escape meaning. Canonical kinds of symbolic AI are logic, games, and planning. Canonical kinds of subsymbolic AI are (un)supervised machine learning and reinforcement learning. Intersymbolic AI interlinks the world of symbolic AI, with its compositional symbolic significance and meaning, and the world of subsymbolic AI, with its summative significance or effect, so that insights from both worlds culminate by moving between and across symbolic AI insights and subsymbolic AI techniques guided by symbolic AI principles. For example, Intersymbolic AI may start with symbolic AI to understand a dynamic system, continue with subsymbolic AI to learn its control, and end with symbolic AI to safely use the outcome of the learned subsymbolic controller in the dynamic system. The way Intersymbolic AI combines symbolic and subsymbolic AI to increase the effectiveness of AI compared to either kind alone is likened to the way the combination of conscious and subconscious thought increases the effectiveness of human thought compared to either kind of thought alone. Some successful contributions to the Intersymbolic AI paradigm are surveyed here, but many more are considered possible by advancing Intersymbolic AI.
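To make the symbolic-to-subsymbolic-to-symbolic workflow in the abstract concrete, the following is a minimal, hypothetical Python sketch (not from the paper): a closed-form symbolic model of a toy braking scenario, a stub standing in for a learned (subsymbolic) controller, and a symbolic runtime safeguard that only admits the learned action when a symbolically derived safety condition holds, falling back to braking otherwise. All names, dynamics, and bounds here are illustrative assumptions.

```python
# Hypothetical sketch of the symbolic -> subsymbolic -> symbolic pipeline:
# a symbolic model of a toy 1-D vehicle, a stubbed learned controller, and a
# symbolic runtime safeguard that checks each proposed action against a
# symbolically derived safety condition before applying it.
# All names, dynamics, and bounds are illustrative assumptions, not from the paper.

from dataclasses import dataclass

DT = 0.1          # discrete time step (s)
A_MAX = 2.0       # maximum acceleration/braking magnitude (m/s^2)
OBSTACLE = 50.0   # position of an obstacle the vehicle must not pass (m)


@dataclass
class State:
    pos: float   # position (m)
    vel: float   # velocity (m/s), kept non-negative


def symbolic_model(state: State, accel: float) -> State:
    """Symbolic (closed-form) dynamics: one discrete double-integrator step."""
    accel = max(-A_MAX, min(A_MAX, accel))
    new_vel = max(0.0, state.vel + accel * DT)
    new_pos = state.pos + state.vel * DT + 0.5 * accel * DT * DT
    return State(new_pos, new_vel)


def symbolically_safe(state: State) -> bool:
    """Safety condition derived from the symbolic model: the stopping distance
    v^2 / (2*A_MAX), plus a one-step margin v*DT for the discrete update,
    stays short of the obstacle."""
    stopping_distance = state.vel ** 2 / (2.0 * A_MAX)
    return state.pos + state.vel * DT + stopping_distance < OBSTACLE


def learned_controller(state: State) -> float:
    """Stand-in for a subsymbolic controller (e.g. a trained RL policy).
    Here it just accelerates greedily; a real policy would be a neural network."""
    return A_MAX


def safeguarded_step(state: State) -> State:
    """Symbolic safeguard: admit the learned action only if the successor state
    predicted by the symbolic model still satisfies the safety condition;
    otherwise fall back to maximal braking, which keeps the condition in this
    toy model."""
    proposed = learned_controller(state)
    candidate = symbolic_model(state, proposed)
    if symbolically_safe(candidate):
        return candidate
    return symbolic_model(state, -A_MAX)  # symbolic fallback action


if __name__ == "__main__":
    s = State(pos=0.0, vel=0.0)
    for _ in range(200):
        s = safeguarded_step(s)
    print(f"final position {s.pos:.1f} m, velocity {s.vel:.1f} m/s (obstacle at {OBSTACLE} m)")
    assert s.pos < OBSTACLE  # the safeguard maintains the symbolic safety invariant
```

In this sketch the symbolic parts bracket the learned part: the symbolic model and safety condition come first, the untrusted learned controller proposes actions, and the symbolic check decides at runtime whether each proposal may be applied, which is the role symbolic AI plays at the final stage of the pipeline described in the abstract.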
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have produced technology that can aid humans in scientific discovery and decision support, but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that current AI shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- A Survey on Verification and Validation, Testing and Evaluations of Neurosymbolic Artificial Intelligence [10.503182476649645]
Neurosymbolic artificial intelligence (AI) is an emerging branch of AI that combines the strengths of symbolic AI and sub-symbolic AI.
A major drawback of sub-symbolic AI is that it acts as a "black box", meaning that predictions are difficult to explain.
This survey explores how neurosymbolic applications can ease the V&V process.
arXiv Detail & Related papers (2024-01-06T10:28:52Z)
- A Historical Interaction between Artificial Intelligence and Philosophy [0.0]
This paper reviews the historical development of AI and representative philosophical thinking from the perspective of the research paradigm.
It considers the methodology and applications of AI from a philosophical perspective and anticipates its continued advancement.
arXiv Detail & Related papers (2022-07-23T22:37:22Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence (HCAI).
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems [21.314210696069495]
We argue that the need for (human-understandable) symbols in human-AI interaction seems quite compelling.
In particular, humans would be interested in providing explicit (symbolic) knowledge and advice, and would expect machine explanations in kind.
This alone requires AI systems to at least do their I/O in symbolic terms.
arXiv Detail & Related papers (2021-09-21T01:30:06Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Symbolic Behaviour in Artificial Intelligence [8.849576130278157]
We argue that the path towards symbolically fluent AI begins with a reinterpretation of what symbols are.
We then outline how this interpretation unifies the behavioural traits humans exhibit when they use symbols.
We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge.
arXiv Detail & Related papers (2021-02-05T20:07:14Z)