Bridging Symbolic Control and Neural Reasoning in LLM Agents: The Structured Cognitive Loop
- URL: http://arxiv.org/abs/2511.17673v2
- Date: Tue, 25 Nov 2025 01:09:40 GMT
- Title: Bridging Symbolic Control and Neural Reasoning in LLM Agents: The Structured Cognitive Loop
- Authors: Myung Ho Kim
- Abstract summary: We introduce Structured Cognitive Loop (SCL), a modular architecture that separates agent cognition into five phases: Retrieval, Cognition, Control, Action, and Memory (R-CCAM). At the core of SCL is Soft Symbolic Control, an adaptive governance mechanism that applies symbolic constraints to probabilistic inference. We provide a complete open-source implementation demonstrating the R-CCAM loop architecture, alongside a live GPT-4o-powered travel planning agent.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language model agents suffer from fundamental architectural problems: entangled reasoning and execution, memory volatility, and uncontrolled action sequences. We introduce Structured Cognitive Loop (SCL), a modular architecture that explicitly separates agent cognition into five phases: Retrieval, Cognition, Control, Action, and Memory (R-CCAM). At the core of SCL is Soft Symbolic Control, an adaptive governance mechanism that applies symbolic constraints to probabilistic inference, preserving neural flexibility while restoring the explainability and controllability of classical symbolic systems. Through empirical validation on multi-step conditional reasoning tasks, we demonstrate that SCL achieves zero policy violations, eliminates redundant tool calls, and maintains complete decision traceability. These results address critical gaps in existing frameworks such as ReAct, AutoGPT, and memory-augmented approaches. Our contributions are threefold: (1) we situate SCL within the taxonomy of hybrid intelligence, differentiating it from prompt-centric and memory-only approaches; (2) we formally define Soft Symbolic Control and contrast it with neuro-symbolic AI; and (3) we derive three design principles for trustworthy agents: modular decomposition, adaptive symbolic governance, and transparent state management. We provide a complete open-source implementation demonstrating the R-CCAM loop architecture, alongside a live GPT-4o-powered travel planning agent. By connecting expert system principles with modern LLM capabilities, this work offers a practical and theoretically grounded path toward reliable, explainable, and governable AI agents.
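The R-CCAM loop described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the authors' released implementation: all names (`r_ccam_step`, `ALLOWED_TOOLS`, the stubbed `retrieval` and `cognition` functions) are assumptions. The `control` phase stands in for Soft Symbolic Control: symbolic checks (a tool allow-list and a redundant-call guard) gate a probabilistic proposal, and every verdict is appended to a trace for decision traceability.

```python
# Hypothetical sketch of the R-CCAM loop; names are illustrative assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Transparent state shared across phases (Memory)."""
    context: list = field(default_factory=list)   # retrieved facts
    trace: list = field(default_factory=list)     # complete decision trace
    called_tools: set = field(default_factory=set)

ALLOWED_TOOLS = {"search_flights", "book_flight"}  # assumed symbolic policy

def retrieval(state, query):
    state.context.append(f"facts for: {query}")   # stand-in for a retriever

def cognition(state, query):
    # stand-in for an LLM proposing an action from the retrieved context
    return {"tool": "book_flight", "args": {"dest": query}}

def control(state, proposal):
    """Soft Symbolic Control: symbolic constraints over a neural proposal."""
    if proposal["tool"] in state.called_tools:
        return None, "rejected: redundant tool call"
    if proposal["tool"] not in ALLOWED_TOOLS:
        return None, "rejected: policy violation"
    return proposal, "approved"

def action(state, proposal):
    state.called_tools.add(proposal["tool"])
    return f"executed {proposal['tool']}({proposal['args']})"

def r_ccam_step(state, query):
    retrieval(state, query)
    proposal = cognition(state, query)
    approved, verdict = control(state, proposal)
    state.trace.append((proposal, verdict))       # traceability in Memory
    return action(state, approved) if approved else verdict
```

Running the same step twice illustrates the claimed properties: the first call is approved and executed, the second is symbolically rejected as a redundant tool call, and both decisions remain inspectable in `state.trace`.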
Related papers
- The Convergence of Schema-Guided Dialogue Systems and the Model Context Protocol [0.0]
This paper establishes a fundamental convergence between Schema-Guided Dialogue (SGD) systems and the Model Context Protocol (MCP). SGD and MCP represent two manifestations of a unified paradigm for deterministic, auditable LLM-agent interaction.
arXiv Detail & Related papers (2026-02-21T09:02:35Z) - Differentiable Modal Logic for Multi-Agent Diagnosis, Orchestration and Communication [0.15229257192293197]
This tutorial demonstrates differentiable modal logic (DML), implemented via Modal Logical Neural Networks (MLNNs). We present a unified neurosymbolic debugging framework through four modalities: epistemic (who to trust), temporal (when events cause failures), deontic (what actions are permitted), and doxastic (how to interpret agent confidence). Key contributions for the neurosymbolic community: (1) interpretable learned structures where trust and causality are explicit parameters, not opaque embeddings; (2) knowledge injection via differentiable axioms that guide learning with sparse data; and (3) practical deployment patterns for monitoring, active control, and communication.
arXiv Detail & Related papers (2026-02-12T15:39:18Z) - Why Does the LLM Stop Computing: An Empirical Study of User-Reported Failures in Open-Source LLMs [50.075587392477935]
We conduct the first large-scale empirical study of 705 real-world failures from the open-source DeepSeek, Llama, and Qwen ecosystems. Our analysis reveals a paradigm shift: white-box orchestration relocates the reliability bottleneck from model algorithmic defects to the systemic fragility of the deployment stack.
arXiv Detail & Related papers (2026-01-20T06:42:56Z) - CoG: Controllable Graph Reasoning via Relational Blueprints and Failure-Aware Refinement over Knowledge Graphs [53.199517625701475]
CoG is a training-free framework inspired by Dual-Process Theory that mimics the interplay between intuition and deliberation. CoG significantly outperforms state-of-the-art approaches in both accuracy and efficiency.
arXiv Detail & Related papers (2026-01-16T07:27:40Z) - The Path Ahead for Agentic AI: Challenges and Opportunities [4.52683540940001]
This chapter examines the emergence of agentic AI systems that operate autonomously in complex environments. We trace the architectural progression from statistical models to transformer-based systems, identifying the capabilities that enable agentic behavior. Unlike existing surveys, we focus on the architectural transition from language understanding to autonomous action, emphasizing the technical gaps that must be resolved before deployment.
arXiv Detail & Related papers (2026-01-06T06:31:42Z) - PISA: A Pragmatic Psych-Inspired Unified Memory System for Enhanced AI Agency [50.712873697511206]
Existing work often lacks adaptability to diverse tasks and overlooks the constructive and task-oriented role of AI agent memory. We propose PISA, a pragmatic, psych-inspired unified memory system that treats memory as a constructive and adaptive process. Our empirical evaluation, conducted on the existing LOCOMO benchmark and our newly proposed AggQA benchmark for data analysis tasks, confirms that PISA sets a new state of the art by significantly enhancing adaptability and long-term knowledge retention.
arXiv Detail & Related papers (2025-10-12T10:34:35Z) - CIR-CoT: Towards Interpretable Composed Image Retrieval via End-to-End Chain-of-Thought Reasoning [93.05917922306196]
Composed Image Retrieval (CIR) aims to find a target image from a reference image and a modification text. CIR-CoT is the first end-to-end retrieval-oriented MLLM designed to integrate explicit Chain-of-Thought (CoT) reasoning.
arXiv Detail & Related papers (2025-10-09T09:41:45Z) - CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z) - Memory Management and Contextual Consistency for Long-Running Low-Code Agents [0.0]
This paper proposes a novel hybrid memory system designed specifically for LCNC agents. Inspired by cognitive science, our architecture combines episodic and semantic memory components with a proactive "Intelligent Decay" mechanism. A key innovation is a user-centric visualization interface, aligned with the LCNC paradigm, which allows non-technical users to manage the agent's memory directly.
arXiv Detail & Related papers (2025-09-27T08:01:26Z) - The Landscape of Agentic Reinforcement Learning for LLMs: A Survey [103.32591749156416]
The emergence of agentic reinforcement learning (Agentic RL) marks a paradigm shift from conventional reinforcement learning applied to large language models (LLM RL). This survey formalizes this conceptual shift by contrasting the degenerate single-step Markov Decision Processes (MDPs) of LLM-RL with the temporally extended, partially observable Markov decision processes (POMDPs) that define Agentic RL.
arXiv Detail & Related papers (2025-09-02T17:46:26Z) - Emergent Cognitive Convergence via Implementation: A Structured Loop Reflecting Four Theories of Mind [0.0]
We report a structural convergence among four influential theories of mind. The convergence emerges unintentionally within a practical AI architecture known as Agentic Flow. This paper proposes that intelligent architectures may evolve toward shared structural patterns shaped by practical constraints.
arXiv Detail & Related papers (2025-07-22T02:54:45Z) - Cognitive Load-Aware Inference: A Neuro-Symbolic Framework for Optimizing the Token Economy of Large Language Models [0.9790236766474201]
This paper introduces the Cognitive Load-Aware Inference (CLAI) framework, which operationalizes principles from Cognitive Load Theory (CLT) and neuroscience for Large Language Model (LLM) inference. We formalize the concepts of Intrinsic Cognitive Load, Extraneous Cognitive Load, and Germane Cognitive Load into quantifiable LLM metrics. We propose two implementation paths: CLAI-Prompt, a zero-shot method that guides a base LLM through cognitive control steps via a structured meta-prompt, and CLAI-Tune, a fine-tuned model that internalizes these principles for spontaneous cognitive economy.
arXiv Detail & Related papers (2025-07-01T10:51:18Z) - Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models [86.88657425848547]
Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. We explicitly align models with three meta-abilities: deduction, induction, and abduction, using automatically generated, self-verifiable tasks. Our three-stage pipeline of individual alignment, parameter-space merging, and domain-specific reinforcement learning boosts performance by over 10% relative to instruction-tuned baselines.
arXiv Detail & Related papers (2025-05-15T17:58:33Z) - ActPC-Chem: Discrete Active Predictive Coding for Goal-Guided Algorithmic Chemistry as a Potential Cognitive Kernel for Hyperon & PRIMUS-Based AGI [0.0]
We explore a novel paradigm (labeled ActPC-Chem) for biologically inspired, goal-guided artificial intelligence (AI). ActPC-Chem is centered on a form of Discrete Active Predictive Coding (ActPC) operating within an algorithmic chemistry of rewrite rules.
arXiv Detail & Related papers (2024-12-21T09:14:25Z) - Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.