Getting from Generative AI to Trustworthy AI: What LLMs might learn from
Cyc
- URL: http://arxiv.org/abs/2308.04445v1
- Date: Mon, 31 Jul 2023 16:29:28 GMT
- Title: Getting from Generative AI to Trustworthy AI: What LLMs might learn from
Cyc
- Authors: Doug Lenat, Gary Marcus
- Abstract summary: Generative AI, the most popular current approach to AI, consists of large language models (LLMs) that are trained to produce outputs that are plausible, but not necessarily correct.
We discuss an alternative approach to AI which could theoretically address many of the limitations associated with current approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI, the most popular current approach to AI, consists of large
language models (LLMs) that are trained to produce outputs that are plausible,
but not necessarily correct. Although their abilities are often uncanny, they
are lacking in aspects of reasoning, leading LLMs to be less than completely
trustworthy. Furthermore, their results tend to be both unpredictable and
uninterpretable.
We lay out 16 desiderata for future AI, and discuss an alternative approach
to AI which could theoretically address many of the limitations associated with
current approaches: AI educated with curated pieces of explicit knowledge and
rules of thumb, enabling an inference engine to automatically deduce the
logical entailments of all that knowledge. Even long arguments produced this
way can be both trustworthy and interpretable, since the full step-by-step line
of reasoning is always available, and for each step the provenance of the
knowledge used can be documented and audited. There is however a catch: if the
logical language is expressive enough to fully represent the meaning of
anything we can say in English, then the inference engine runs much too slowly.
That's why symbolic AI systems typically settle for some fast but much less
expressive logic, such as knowledge graphs. We describe how one AI system, Cyc,
has developed ways to overcome that tradeoff and is able to reason in higher
order logic in real time.
We suggest that any trustworthy general AI will need to hybridize the two
approaches, combining the LLM approach with a more formal one, and we lay out a
path to realizing that dream.
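
To make the idea of auditable, provenance-tracked inference concrete, here is a
minimal illustrative sketch in Python. It is emphatically not Cyc and not the
authors' engine: the Horn-style rule format, the toy facts, the rule name, and
the source labels are all invented for illustration, and Cyc's actual
higher-order representation and real-time inference machinery are far richer
than anything shown here.

```python
# Minimal sketch of provenance-tracked forward chaining over Horn-style rules.
# This is NOT Cyc or the authors' engine; the facts, rules, and "sources"
# below are invented purely to illustrate auditable, step-by-step inference.

from dataclasses import dataclass


@dataclass
class Fact:
    statement: str
    source: str                # where this piece of knowledge came from
    derived_from: tuple = ()   # facts it was derived from (empty if asserted)


@dataclass
class Rule:
    name: str
    premises: tuple            # statements that must all already be known
    conclusion: str


def forward_chain(facts, rules):
    """Apply rules repeatedly until no new conclusions appear."""
    known = {f.statement: f for f in facts}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conclusion in known:
                continue
            if all(p in known for p in rule.premises):
                used = tuple(known[p] for p in rule.premises)
                known[rule.conclusion] = Fact(rule.conclusion, rule.name, used)
                changed = True
    return known


def explain(fact, indent=0):
    """Print the full derivation tree, with provenance at every step."""
    print("  " * indent + f"{fact.statement}   [{fact.source}]")
    for parent in fact.derived_from:
        explain(parent, indent + 1)


if __name__ == "__main__":
    facts = [
        Fact("Socrates is a man", source="asserted: example KB"),
        Fact("All men are mortal", source="asserted: example KB"),
    ]
    rules = [
        Rule("mortality-rule",
             premises=("Socrates is a man", "All men are mortal"),
             conclusion="Socrates is mortal"),
    ]
    kb = forward_chain(facts, rules)
    explain(kb["Socrates is mortal"])
```

The point of the sketch is only that every derived conclusion carries the facts
and rule that produced it, so the full line of reasoning can be printed and
audited step by step.

Likewise, one possible reading of the suggested LLM/symbolic hybrid is a
propose-and-verify loop: the LLM drafts an answer together with the claims it
relied on, and a curated knowledge base accepts the answer only if every
supporting claim checks out. The sketch below assumes a hypothetical
`llm_propose` callable and a toy knowledge base; neither corresponds to a real
API nor to the paper's concrete proposal.

```python
# Minimal sketch of one possible LLM + symbolic hybrid loop: the LLM proposes
# an answer plus the claims it relied on, and a curated knowledge base accepts
# the answer only if every claim is verified. `llm_propose` is a hypothetical
# stand-in, not a real model API, and the KB is a toy set of sentences.

from typing import Callable

CURATED_KB = {
    "Paris is the capital of France",
    "France is in Europe",
}


def verify(claims: list[str], kb: set[str]) -> tuple[bool, list[str]]:
    """Check every claim the LLM relied on against the curated KB."""
    unsupported = [c for c in claims if c not in kb]
    return (len(unsupported) == 0, unsupported)


def answer(question: str,
           llm_propose: Callable[[str], tuple[str, list[str]]]) -> str:
    proposal, supporting_claims = llm_propose(question)
    ok, unsupported = verify(supporting_claims, CURATED_KB)
    if ok:
        return f"{proposal}  (all supporting claims verified against the KB)"
    return f"Declining to answer; unverified claims: {unsupported}"


if __name__ == "__main__":
    # Hypothetical LLM stub: returns an answer and the claims it relied on.
    def fake_llm(question):
        return ("Paris is in Europe",
                ["Paris is the capital of France", "France is in Europe"])

    print(answer("Is Paris in Europe?", fake_llm))
```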
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e. the same problem formulated in different ways.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z) - Cognition is All You Need -- The Next Layer of AI Above Large Language
Models [0.0]
We present Cognitive AI, a framework for neurosymbolic cognition outside of large language models.
We propose that Cognitive AI is a necessary precursor for the evolution of more advanced forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own.
We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.
arXiv Detail & Related papers (2024-03-04T16:11:57Z) - Bootstrapping Developmental AIs: From Simple Competences to Intelligent
Human-Compatible AIs [0.0]
The mainstream AI approaches are the generative deep learning approach built on large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z) - Taming AI Bots: Controllability of Neural States in Large Language
Models [81.1573516550699]
We first introduce a formal definition of "meaning" that is amenable to analysis.
We then characterize "meaningful data" on which large language models (LLMs) are ostensibly trained.
We show that, when restricted to the space of meanings, an AI bot is controllable.
arXiv Detail & Related papers (2023-05-29T03:58:33Z) - Large Language Models are In-Context Semantic Reasoners rather than
Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z) - Understanding Natural Language Understanding Systems. A Critical
Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end-users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly (a rough illustrative sketch follows at the end of this list).
arXiv Detail & Related papers (2020-09-29T13:29:44Z)
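
The CIU entry above describes explaining black-box outcomes directly, without
first extracting an interpretable surrogate model. The sketch below is a rough,
illustrative reading of the general CIU idea (perturb a feature within its
allowed range in the current context, then compare the resulting output range
against the model's global output range); the paper's exact definitions may
differ, and the toy model, ranges, and inputs are invented for illustration.

```python
# Rough, illustrative sketch of the Contextual Importance and Utility (CIU)
# idea: explain a black-box prediction directly by perturbing a feature within
# its allowed range in the current context and comparing the resulting output
# range with the model's global output range. This is one reading of the
# general approach; the paper's exact definitions may differ, and the toy
# model, ranges, and inputs below are invented.

import numpy as np


def ciu_for_feature(model, x, feature, feature_range,
                    out_min=0.0, out_max=1.0, n_samples=100):
    """Contextual Importance (CI) and Contextual Utility (CU) of one feature."""
    x = np.asarray(x, dtype=float)
    outputs = []
    for value in np.linspace(*feature_range, n_samples):
        x_perturbed = x.copy()
        x_perturbed[feature] = value
        outputs.append(model(x_perturbed))
    cmin, cmax = min(outputs), max(outputs)
    y = model(x)
    ci = (cmax - cmin) / (out_max - out_min)  # how much can this feature move the output?
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5  # how favourable is its current value?
    return ci, cu


if __name__ == "__main__":
    # Toy black-box "model": a probability-like score from two features.
    def black_box(x):
        return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 1.0 * x[1])))

    ci, cu = ciu_for_feature(black_box, x=[0.3, 0.8], feature=0,
                             feature_range=(0.0, 1.0))
    print(f"feature 0: contextual importance={ci:.2f}, utility={cu:.2f}")
```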