Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs
- URL: http://arxiv.org/abs/2309.10371v1
- Date: Tue, 19 Sep 2023 07:12:55 GMT
- Title: Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs
- Authors: Ben Goertzel
- Abstract summary: It is argued that incremental improvement of such LLMs is not a viable approach to working toward human-level AGI.
Social and ethical matters regarding LLMs are briefly touched on from this perspective.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A moderately detailed consideration of interactive LLMs as cognitive systems
is given, focusing on LLMs circa mid-2023 such as ChatGPT, GPT-4, Bard, Llama,
etc. Cognitive strengths of these systems are reviewed, and then careful
attention is paid to the substantial differences between the sort of cognitive
system these LLMs are, and the sort of cognitive systems human beings are. It
is found that many of the practical weaknesses of these AI systems can be tied
specifically to gaps in the basic cognitive architectures according to which
these systems are built. It is argued that incremental improvement of such LLMs
is not a viable approach to working toward human-level AGI, in practical terms
given realizable amounts of compute resources. This does not imply there is
nothing to learn about human-level AGI from studying and experimenting with
LLMs, nor that LLMs cannot form significant parts of human-level AGI
architectures that also incorporate other ideas. Social and ethical matters
regarding LLMs are briefly touched on from this perspective. Care should be
taken regarding misinformation and other issues, and economic upheavals will
need their own social remedies given their unpredictable course, as with any
powerfully impactful technology; even so, the sort of policy needed for modern
LLMs is quite different from what would be required if a more credible
approximation to human-level AGI were at hand.
Related papers
- Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges [11.19619695546899]
This comprehensive review explores the intersection of Large Language Models (LLMs) and cognitive science.
We analyze methods for evaluating LLMs' cognitive abilities and discuss their potential as cognitive models.
We assess cognitive biases and limitations of LLMs, along with proposed methods for improving their performance.
arXiv Detail & Related papers (2024-09-04T02:30:12Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- Metacognitive Myopia in Large Language Models [0.0]
Large Language Models (LLMs) exhibit potentially harmful biases that reinforce culturally inherent stereotypes, cloud moral judgments, or amplify positive evaluations of majority groups.
We propose metacognitive myopia as a cognitive-ecological framework that can account for a conglomerate of established and emerging LLM biases.
Our theoretical framework posits that a lack of the two components of metacognition, monitoring and control, causes five symptoms of metacognitive myopia in LLMs.
arXiv Detail & Related papers (2024-08-10T14:43:57Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities; a minimal sketch of one standard unlearning recipe appears after this list.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Can A Cognitive Architecture Fundamentally Enhance LLMs? Or Vice Versa? [0.32634122554913997]
The paper argues that incorporating insights from human cognition and psychology, as embodied by a computational cognitive architecture, can help develop systems that are more capable, more reliable, and more human-like.
arXiv Detail & Related papers (2024-01-19T01:14:45Z)
- Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models [20.26440212703017]
There has been an increase in the development and usage of large language models (LLMs).
This paper addresses what ethical AI principles and guidelines can be used to address highly advanced LLMs.
arXiv Detail & Related papers (2023-12-19T06:28:43Z)
- Deception Abilities Emerged in Large Language Models [0.0]
Large language models (LLMs) are currently at the forefront of intertwining artificial intelligence (AI) systems with human communication and everyday life.
This study reveals that deception strategies emerged in state-of-the-art LLMs, such as GPT-4, but were non-existent in earlier LLMs.
We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents.
arXiv Detail & Related papers (2023-07-31T09:27:01Z)
- How Can Recommender Systems Benefit from Large Language Models: A Survey [82.06729592294322]
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z)
- AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap [46.98582021477066]
The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large.
We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly.
It is paramount to pursue new approaches to provide transparency for LLMs.
arXiv Detail & Related papers (2023-06-02T22:51:26Z)
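As referenced in the machine unlearning entry above, one common baseline behind that line of work is gradient ascent on a small "forget set", which lowers the model's likelihood of the unwanted content. The following is a minimal illustrative sketch, not the cited survey's specific method; the model name, forget texts, learning rate, and step count are placeholder assumptions.

```python
# Minimal gradient-ascent unlearning sketch (illustrative assumptions:
# "gpt2" as a stand-in model, a toy one-item forget set, untuned hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM fits this pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

forget_texts = ["example sensitive record to be unlearned"]  # hypothetical data
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for _ in range(3):  # a few ascent steps; real runs tune this carefully
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        # Negate the language-modeling loss: minimizing -loss *maximizes*
        # the loss on the forget set, pushing its likelihood down.
        (-outputs.loss).backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice this is paired with a retain-set objective (e.g., regularization toward the original model's outputs) so that general capabilities are not destroyed along with the targeted content.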