Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI
- URL: http://arxiv.org/abs/2406.11563v2
- Date: Mon, 22 Jul 2024 05:46:01 GMT
- Title: Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI
- Authors: André Platzer
- Abstract summary: Intersymbolic AI combines symbolic and subsymbolic AI to increase the effectiveness of AI.
Intersymbolic AI interlinks the world of symbolic AI, with its compositional symbolic significance and meaning, and the world of subsymbolic AI, with its summative significance or effect.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This perspective piece calls for the study of the new field of Intersymbolic AI, by which we mean the combination of symbolic AI, whose building blocks have inherent significance/meaning, with subsymbolic AI, whose entirety creates significance/effect despite the fact that individual building blocks escape meaning. Canonical kinds of symbolic AI are logic, games and planning. Canonical kinds of subsymbolic AI are (un)supervised machine and reinforcement learning. Intersymbolic AI interlinks the worlds of symbolic AI with its compositional symbolic significance and meaning and of subsymbolic AI with its summative significance or effect to enable culminations of insights from both worlds by going between and across symbolic AI insights with subsymbolic AI techniques that are being helped by symbolic AI principles. For example, Intersymbolic AI may start with symbolic AI to understand a dynamic system, continue with subsymbolic AI to learn its control, and end with symbolic AI to safely use the outcome of the learned subsymbolic AI controller in the dynamic system. Intersymbolic AI combines both symbolic and subsymbolic AI to increase the effectiveness of AI compared to either kind of AI alone, in much the same way that the combination of both conscious and subconscious thought increases the effectiveness of human thought compared to either kind of thought alone. Some successful contributions to the Intersymbolic AI paradigm are surveyed here but many more are considered possible by advancing Intersymbolic AI.
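The abstract's running example (symbolic AI to understand a dynamic system, subsymbolic AI to learn its control, symbolic AI to use the learned controller safely) can be illustrated on a toy one-dimensional system. This is a minimal sketch of that three-stage pipeline, not anything from the paper itself: the dynamics model, the random-search "learning" stage, and the runtime safety monitor are all illustrative stand-ins.

```python
import random

DT = 0.1  # integration step of the toy plant

def step(x, u):
    """Symbolic stage 1: a known, explicitly stated model of the
    dynamics, x' = x + u * DT."""
    return x + u * DT

def learn_gain(trials=200, seed=0):
    """Subsymbolic stage: learn a feedback gain k for u = -k * x by
    random search, standing in for reinforcement learning."""
    rng = random.Random(seed)
    best_k, best_cost = 0.0, float("inf")
    for _ in range(trials):
        k = rng.uniform(0.0, 5.0)
        x, cost = 1.0, 0.0
        for _ in range(50):
            x = step(x, -k * x)
            cost += x * x  # quadratic cost penalizes distance from 0
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

def safe_control(x, k, u_max=2.0):
    """Symbolic stage 2: a runtime monitor enforces the formally stated
    bound |u| <= u_max on the learned controller's output before it
    reaches the plant."""
    u = -k * x
    return max(-u_max, min(u_max, u))

# Run the pipeline: learn a controller, then deploy it under the monitor.
k = learn_gain()
x = 1.0
for _ in range(100):
    x = step(x, safe_control(x, k))
```

The point of the sketch is the division of labor: the learned gain is treated as untrusted, and the symbolic monitor guarantees the actuation bound regardless of what the subsymbolic stage produced.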
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [54.247747237176625]
Article explores the convergence of connectionist and symbolic artificial intelligence (AI)
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - A Survey on Verification and Validation, Testing and Evaluations of Neurosymbolic Artificial Intelligence [10.503182476649645]
Neurosymbolic artificial intelligence (AI) is an emerging branch of AI that combines the strengths of symbolic AI and sub-symbolic AI.
A major drawback of sub-symbolic AI is that it acts as a "black box", meaning that predictions are difficult to explain.
This survey explores how neurosymbolic applications can ease the V&V process.
arXiv Detail & Related papers (2024-01-06T10:28:52Z) - A Historical Interaction between Artificial Intelligence and Philosophy [0.0]
This paper reviews the historical development of AI and representative philosophical thinking from the perspective of the research paradigm.
It considers the methodology and applications of AI from a philosophical perspective and anticipates its continued advancement.
arXiv Detail & Related papers (2022-07-23T22:37:22Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence (HCAI).
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems [21.314210696069495]
We argue that the need for (human-understandable) symbols in human-AI interaction seems quite compelling.
In particular, humans would be interested in providing explicit (symbolic) knowledge and advice, and would expect machine explanations in kind.
This alone requires AI systems to at least do their I/O in symbolic terms.
arXiv Detail & Related papers (2021-09-21T01:30:06Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Symbolic Behaviour in Artificial Intelligence [8.849576130278157]
We argue that the path towards symbolically fluent AI begins with a reinterpretation of what symbols are.
We then outline how this interpretation unifies the behavioural traits humans exhibit when they use symbols.
We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge.
arXiv Detail & Related papers (2021-02-05T20:07:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.