Applying Cognitive Design Patterns to General LLM Agents
- URL: http://arxiv.org/abs/2505.07087v2
- Date: Fri, 13 Jun 2025 21:53:31 GMT
- Title: Applying Cognitive Design Patterns to General LLM Agents
- Authors: Robert E. Wray, James R. Kirk, John E. Laird
- Abstract summary: This paper outlines a few recurring cognitive design patterns that have appeared in various pre-transformer AI architectures. We then explore how these patterns are evident in systems using large language models (LLMs). Examining and applying these recurring patterns enables predictions of gaps or deficiencies in today's agentic LLM systems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One goal of AI (and AGI) is to identify and understand specific mechanisms and representations sufficient for general intelligence. Often, this work manifests in research focused on architectures and many cognitive architectures have been explored in AI/AGI. However, different research groups and even different research traditions have somewhat independently identified similar/common patterns of processes and representations or "cognitive design patterns" that are manifest in existing architectures. Today, AI systems exploiting large language models (LLMs) offer a relatively new combination of mechanisms and representations available for exploring the possibilities of general intelligence. This paper outlines a few recurring cognitive design patterns that have appeared in various pre-transformer AI architectures. We then explore how these patterns are evident in systems using LLMs, especially for reasoning and interactive ("agentic") use cases. Examining and applying these recurring patterns enables predictions of gaps or deficiencies in today's Agentic LLM Systems and identification of subjects of future research towards general intelligence using generative foundation models.
Related papers
- Large Language Model Agent: A Survey on Methodology, Applications and Challenges
Large Language Model (LLM) agents, with goal-driven behaviors and dynamic adaptation capabilities, potentially represent a critical pathway toward artificial general intelligence. This survey systematically deconstructs LLM agent systems through a methodology-centered taxonomy. Our work provides a unified architectural perspective, examining how agents are constructed, how they collaborate, and how they evolve over time.
arXiv Detail & Related papers (2025-03-27T12:50:17Z)
- A Survey of Model Architectures in Information Retrieval
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation. We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs). We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z)
- Explainability for Vision Foundation Models: A Survey
Foundation models occupy an ambiguous position in the explainability domain. They are characterized by their extensive generalization capabilities and emergent uses. We discuss the challenges faced by current research in integrating XAI within foundation models.
arXiv Detail & Related papers (2025-01-21T15:18:55Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
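The retrieval-enhanced paradigm described above pairs a predictive model with a retriever that supplies relevant items from an external corpus before inference. A minimal sketch of that loop, under the assumption of a toy bag-of-words retriever (all names and the trivial "model" are illustrative, not from the REML paper):

```python
# Sketch of a retrieval-enhanced prediction loop: retrieve relevant items
# from an external corpus, then fuse them with the query before inference.
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector as a Counter (stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus items most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def predict(query, corpus):
    """Fuse retrieved artefacts with the query; the 'model' here is just
    string concatenation, standing in for any downstream predictor."""
    context = retrieve(query, corpus)
    return " ".join(context) + " | " + query

corpus = [
    "transformers use attention for sequence modeling",
    "retrieval augments models with external memory",
    "cognitive architectures encode recurring design patterns",
]
print(predict("how does retrieval help models", corpus))
```

The design point is the separation of concerns: the retriever and the predictor are independent components, so either can be swapped (e.g. a dense encoder for `bow`, an LLM for the concatenation step) without changing the overall loop.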
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models
We present a novel interactive application aimed towards understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z)
- Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs).
We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence.
These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems.
arXiv Detail & Related papers (2023-08-18T21:42:47Z)
- Conceptual Modeling and Artificial Intelligence: A Systematic Mapping Study
In conceptual modeling (CM), humans apply abstraction to represent excerpts of reality for the purposes of understanding, communication, and processing by machines.
Recently, a trend toward intertwining CM and AI emerged.
This systematic mapping study shows how this interdisciplinary research field is structured, which mutual benefits are gained by the intertwining, and future research directions.
arXiv Detail & Related papers (2023-03-12T21:23:46Z)
- Artefact Retrieval: Overview of NLP Models with Knowledge Base Access
This paper systematically describes the typology of artefacts (items retrieved from a knowledge base), retrieval mechanisms and the way these artefacts are fused into the model.
Most of the focus is given to language models, though we also show how question answering, fact-checking and dialogue models fit into this system as well.
arXiv Detail & Related papers (2022-01-24T13:15:33Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance of the learning models in agents than conventional federated learning (FL) algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.