Can A Cognitive Architecture Fundamentally Enhance LLMs? Or Vice Versa?
- URL: http://arxiv.org/abs/2401.10444v1
- Date: Fri, 19 Jan 2024 01:14:45 GMT
- Title: Can A Cognitive Architecture Fundamentally Enhance LLMs? Or Vice Versa?
- Authors: Ron Sun
- Abstract summary: The paper argues that incorporating insights from human cognition and psychology, as embodied by a computational cognitive architecture, can help develop systems that are more capable, more reliable, and more human-like.
- Score: 0.32634122554913997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper discusses what is needed to address the limitations of current LLM-centered AI systems. The paper argues that incorporating insights from human cognition and psychology, as embodied by a computational cognitive architecture, can help develop systems that are more capable, more reliable, and more human-like. It emphasizes the importance of the dual-process architecture and the hybrid neuro-symbolic approach in addressing the limitations of current LLMs. In the opposite direction, the paper also highlights the need for an overhaul of computational cognitive architectures to better reflect advances in AI and computing technology. Overall, the paper advocates for a multidisciplinary, mutually beneficial approach towards developing better models both for AI and for understanding the human mind.
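To make the dual-process, hybrid neuro-symbolic idea concrete, the sketch below pairs a fast, implicit (LLM-like) proposer with a slower, explicit symbolic checker. This is an illustration only, assuming a stubbed proposer and a toy rule base; it is not the architecture described in the paper, and all names are hypothetical.

```python
# Minimal illustrative sketch of a dual-process, hybrid neuro-symbolic loop.
# The proposer, the rule base, and the query are placeholders, not the paper's design.
from dataclasses import dataclass


@dataclass
class Proposal:
    answer: str
    rationale: str


def implicit_propose(query: str) -> Proposal:
    """System-1-like step: a fast, associative guess (stand-in for an LLM call)."""
    return Proposal(answer="Socrates is mortal", rationale="pattern completion")


def explicit_check(proposal: Proposal, facts: set[str],
                   rules: list[tuple[str, str]]) -> bool:
    """System-2-like step: verify the guess against explicit symbolic knowledge."""
    derived = set(facts)
    changed = True
    while changed:  # naive forward chaining over (premise -> conclusion) rules
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return proposal.answer in derived


facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]

proposal = implicit_propose("Is Socrates mortal?")
verdict = explicit_check(proposal, facts, rules)
print(proposal.answer, "-> verified" if verdict else "-> rejected")
```

The division of labor is the point of the pattern: the implicit module generates candidates cheaply, while the explicit module can verify or veto them, which is one possible reading of how a dual-process cognitive architecture could wrap an LLM.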
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Psychomatics -- A Multidisciplinary Framework for Understanding Artificial Minds [0.319565400223685]
This paper introduces Psychomatics, a framework bridging cognitive science, linguistics, and computer science.
It aims to better understand the high-level functioning of LLMs.
Psychomatics holds the potential to yield transformative insights into the nature of language, cognition, and intelligence.
arXiv Detail & Related papers (2024-07-23T12:53:41Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective [0.9790236766474198]
We build a timely and meaningful interdisciplinary perspective on deceptive AI.
We propose DAMAS -- a holistic Multi-Agent Systems framework for the socio-cognitive modelling and analysis of deception.
This paper covers the topic of modelling and explaining deception using AI approaches from the perspectives of Computer Science, Philosophy, Psychology, Ethics, and Intelligence Analysis.
arXiv Detail & Related papers (2024-06-09T10:31:26Z)
- Should We Fear Large Language Models? A Structural Analysis of the Human Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens of Heidegger's Philosophy [0.0]
This study investigates the capabilities and risks of Large Language Models (LLMs).
It draws innovative parallels between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand".
Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short of authentic rational reasoning and lack creative reasoning capabilities.
arXiv Detail & Related papers (2024-03-05T19:40:53Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs [0.0]
It is argued that incremental improvement of such LLMs is not a viable approach to working toward human-level AGI.
Social and ethical matters regarding LLMs are briefly touched on from this perspective.
arXiv Detail & Related papers (2023-09-19T07:12:55Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Demanding and Designing Aligned Cognitive Architectures [0.0]
With AI systems becoming more powerful and pervasive, there is increasing debate about keeping their actions aligned with the broader goals and needs of humanity.
This multi-disciplinary, multi-stakeholder debate must resolve many issues; here we examine three of them.
The first issue is to clarify what demands stakeholders might usefully make of the designers of AI systems, useful in the sense that the technology to implement them exists.
The second issue is to move beyond an analytical framing that equates useful intelligence with reward maximization alone.
arXiv Detail & Related papers (2021-12-19T16:49:28Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields (a toy illustration of such hierarchical aggregation follows the listing below).
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
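As a toy illustration of the hierarchical, group-wise learning idea summarized in the Dem-AI entry above, the sketch below aggregates member models within each group and then aggregates the group models into a global model. The grouping, the simple averaging rule, and all names are assumptions for illustration; they are not the reference design from the paper.

```python
# Illustrative sketch only: two-level hierarchical aggregation in the spirit of
# self-organized groups of learning agents. Not the Dem-AI reference design.
import numpy as np


def average_models(models: list[np.ndarray]) -> np.ndarray:
    """Aggregate member parameter vectors into a group-level model (simple mean)."""
    return np.mean(models, axis=0)


# Bottom level: specialized groups, each holding its members' model parameters.
groups = {
    "group_a": [np.array([1.0, 2.0]), np.array([1.2, 1.8])],
    "group_b": [np.array([3.0, 0.5]), np.array([2.8, 0.7])],
}

# Level 1: each group forms its own collective model.
group_models = {name: average_models(members) for name, members in groups.items()}

# Level 2: group models are aggregated again into a global model.
global_model = average_models(list(group_models.values()))
print(group_models, global_model)
```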