Synergistic Integration of Large Language Models and Cognitive
Architectures for Robust AI: An Exploratory Analysis
- URL: http://arxiv.org/abs/2308.09830v3
- Date: Thu, 28 Sep 2023 15:10:56 GMT
- Title: Synergistic Integration of Large Language Models and Cognitive
Architectures for Robust AI: An Exploratory Analysis
- Authors: Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
- Abstract summary: This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs).
We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence.
These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems.
- Score: 12.9222727028798
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper explores the integration of two AI subdisciplines employed in the
development of artificial agents that exhibit intelligent behavior: Large
Language Models (LLMs) and Cognitive Architectures (CAs). We present three
integration approaches, each grounded in theoretical models and supported by
preliminary empirical evidence. The modular approach, which introduces four
models with varying degrees of integration, makes use of chain-of-thought
prompting, and draws inspiration from augmented LLMs, the Common Model of
Cognition, and the simulation theory of cognition. The agency approach,
motivated by the Society of Mind theory and the LIDA cognitive architecture,
proposes the formation of agent collections that interact at micro and macro
cognitive levels, driven by either LLMs or symbolic components. The
neuro-symbolic approach, which takes inspiration from the CLARION cognitive
architecture, proposes a model where bottom-up learning extracts symbolic
representations from an LLM layer and top-down guidance utilizes symbolic
representations to direct prompt engineering in the LLM layer. These approaches
aim to harness the strengths of both LLMs and CAs, while mitigating their
weaknesses, thereby advancing the development of more robust AI systems. We
discuss the tradeoffs and challenges associated with each approach.
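As a rough illustration of the neuro-symbolic approach described in the abstract, the sketch below shows one way a bottom-up/top-down loop could be wired in plain Python. The `query_llm` stub, the action syntax, and the parsing scheme are hypothetical stand-ins for illustration, not the paper's actual interface.

```python
# Minimal sketch of the CLARION-inspired loop: bottom-up extraction of
# symbols from LLM output, top-down use of those symbols to shape the next
# prompt. `query_llm` and the action syntax are hypothetical placeholders.

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned action here."""
    return "move(block_a, table)"

def bottom_up_extract(llm_output: str) -> dict:
    """Bottom-up learning: lift raw LLM text into a symbolic structure."""
    action, _, rest = llm_output.partition("(")
    return {"action": action.strip(),
            "args": [a.strip() for a in rest.rstrip(")").split(",")]}

def top_down_prompt(symbols: dict, goal: str) -> str:
    """Top-down guidance: symbolic state directs the prompt engineering."""
    return (f"Goal: {goal}\n"
            f"Last verified action: {symbols['action']}({', '.join(symbols['args'])})\n"
            f"Propose the next action in the form name(arg1, arg2).")

goal = "stack block_a on block_b"
prompt = f"Goal: {goal}\nPropose an action in the form name(arg1, arg2)."
for _ in range(3):  # a few bottom-up / top-down cycles
    symbols = bottom_up_extract(query_llm(prompt))
    prompt = top_down_prompt(symbols, goal)
print(prompt)
```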
Related papers
- Unlocking Structured Thinking in Language Models with Cognitive Prompting [0.0]
We propose cognitive prompting as a novel approach to guide problem-solving in large language models.
We evaluate the effectiveness of cognitive prompting on Meta's LLaMA models.
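The summary leaves the prompt structure implicit; as a hedged sketch, cognitive prompting can be read as prefixing a problem with an explicit sequence of cognitive operations. The operation list below is an illustrative assumption, not the paper's published operation set.

```python
# Hypothetical sketch of a cognitive prompt; the operation sequence below is
# an illustrative assumption, not quoted from the paper.
COGNITIVE_OPERATIONS = [
    "Clarify the goal of the problem.",
    "Decompose the problem into sub-problems.",
    "Filter for the information relevant to each sub-problem.",
    "Integrate the partial results into a final answer.",
]

def build_cognitive_prompt(question: str) -> str:
    """Prefix a problem with an explicit sequence of cognitive operations."""
    steps = "\n".join(f"{i}. {op}" for i, op in enumerate(COGNITIVE_OPERATIONS, 1))
    return (f"Work through the following steps in order:\n{steps}\n\n"
            f"Problem: {question}")

print(build_cognitive_prompt("A train travels 120 km in 1.5 h; what is its speed?"))
```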
arXiv Detail & Related papers (2024-10-03T19:53:47Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
Deploying Large Language Models (LLMs) to tackle CMR tasks has recently emerged as the mainstream approach to improving CMR effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance and improved grounded decision-making capability.
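LLM-ACTR learns its representations rather than hand-coding them; purely as a loose illustration of the "extract and embed" idea, the sketch below serializes a hypothetical ACT-R-style decision trace into text that could condition or fine-tune an LLM. The trace fields are assumptions, not the framework's actual format.

```python
# Hypothetical sketch: serialize an ACT-R-style decision trace into a
# conditioning string for an LLM. Trace format and fields are assumptions.

actr_trace = [
    {"production": "check-tolerance", "utility": 4.2, "chosen": True},
    {"production": "select-process",  "utility": 3.1, "chosen": False},
]

def trace_to_text(trace: list[dict]) -> str:
    """Flatten a symbolic decision trace into text an LLM can consume."""
    lines = [f"{step['production']} (utility={step['utility']}, "
             f"{'fired' if step['chosen'] else 'skipped'})" for step in trace]
    return "ACT-R decision trace:\n" + "\n".join(lines)

print(trace_to_text(actr_trace))
```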
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Multi-step Inference over Unstructured Data [2.169874047093392]
High-stakes decision-making tasks in fields such as medicine, law, and finance demand a high level of precision, comprehensiveness, and logical consistency.
We have developed a neuro-symbolic AI platform to tackle these problems.
The platform integrates fine-tuned LLMs for knowledge extraction and alignment with a robust symbolic reasoning engine.
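The platform's internals are not given in the summary; the toy sketch below shows only the generic extract-then-reason pattern it describes, with a hard-coded stand-in for the fine-tuned LLM extractor and a minimal forward-chaining engine in place of the platform's actual symbolic reasoner.

```python
# Toy sketch of the extract-then-reason pattern. `extract_facts` stands in
# for a fine-tuned LLM; the rule engine is minimal forward chaining.

def extract_facts(document: str) -> set[tuple]:
    """Hypothetical LLM extraction step, hard-coded here for illustration."""
    return {("patient", "takes", "warfarin"), ("patient", "takes", "aspirin")}

RULES = [
    # (premises, conclusion): if all premises hold, assert the conclusion.
    ([("patient", "takes", "warfarin"), ("patient", "takes", "aspirin")],
     ("patient", "at_risk_of", "bleeding")),
]

def forward_chain(facts: set[tuple]) -> set[tuple]:
    """Apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(extract_facts("...clinical note text...")))
```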
arXiv Detail & Related papers (2024-06-26T00:00:45Z)
- Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models [55.20626448358655]
This study explores universal interaction recognition in an open-world setting through the use of Vision-Language (VL) foundation models and large language models (LLMs).
Our design includes an HO Prompt-guided Decoder (HOPD), which facilitates the association of high-level relation representations in the foundation model with various HO pairs within the image.
For open-category interaction recognition, our method supports either of two input types: interaction phrase or interpretive sentence.
arXiv Detail & Related papers (2023-11-07T08:27:32Z)
- Balancing Autonomy and Alignment: A Multi-Dimensional Taxonomy for Autonomous LLM-powered Multi-Agent Architectures [0.0]
Large language models (LLMs) have revolutionized the field of artificial intelligence, endowing it with sophisticated language understanding and generation capabilities.
This paper proposes a comprehensive multi-dimensional taxonomy to analyze how autonomous LLM-powered multi-agent systems balance the dynamic interplay between autonomy and alignment.
arXiv Detail & Related papers (2023-10-05T16:37:29Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than built from pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
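As a generic illustration of attractor dynamics (not the paper's model), the classic Hopfield network below carves a state space into discrete basins whose fixed points behave like symbol-like states: a corrupted input settles onto the nearest stored pattern.

```python
# A classic Hopfield network as a minimal, generic illustration of attractor
# dynamics: fixed points of the dynamics act as discrete, symbol-like states.
# This is not the paper's model.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],   # two stored "symbols"
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def settle(state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate the dynamics until the state falls into an attractor basin."""
    s = state.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

noisy = np.array([1, -1, -1, -1, 1, -1])  # corrupted version of pattern 0
print(settle(noisy))                       # converges back to pattern 0
```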
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
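GBPGR's bi-level formulation is more involved than the summary conveys; the toy sketch below illustrates only the general pattern of symbolic knowledge revising a neural model's soft predictions, with made-up labels, scores, and rules.

```python
# Toy sketch of the general pattern (not GBPGR itself): symbolic constraints
# revise a neural model's soft predictions when they violate known rules.

# Hypothetical neural outputs: label -> probability for one example.
neural_scores = {"bird": 0.48, "penguin": 0.40, "can_fly": 0.55}

def apply_constraints(scores: dict) -> dict:
    """Symbolic knowledge: a penguin is a bird and cannot fly."""
    revised = dict(scores)
    if revised["penguin"] > 0.3:  # if 'penguin' is plausible...
        revised["bird"] = max(revised["bird"], revised["penguin"])  # penguins are birds
        revised["can_fly"] = min(revised["can_fly"], 1 - revised["penguin"])  # cap flight
    return revised

print(apply_constraints(neural_scores))
```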
This list is automatically generated from the titles and abstracts of the papers on this site.