The Convergence of Schema-Guided Dialogue Systems and the Model Context Protocol
- URL: http://arxiv.org/abs/2602.18764v1
- Date: Sat, 21 Feb 2026 09:02:35 GMT
- Title: The Convergence of Schema-Guided Dialogue Systems and the Model Context Protocol
- Authors: Andreas Schlapbach
- Abstract summary: This paper establishes a fundamental convergence: Schema-Guided Dialogue (SGD) and the Model Context Protocol (MCP) represent two manifestations of a unified paradigm for deterministic, auditable LLM-agent interaction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper establishes a fundamental convergence: Schema-Guided Dialogue (SGD) and the Model Context Protocol (MCP) represent two manifestations of a unified paradigm for deterministic, auditable LLM-agent interaction. SGD, designed for dialogue-based API discovery (2019), and MCP, now the de facto standard for LLM-tool integration, share the same core insight -- that schemas can encode not just tool signatures but operational constraints and reasoning guidance. By analyzing this convergence, we extract five foundational principles for schema design: (1) Semantic Completeness over Syntactic Precision, (2) Explicit Action Boundaries, (3) Failure Mode Documentation, (4) Progressive Disclosure Compatibility, and (5) Inter-Tool Relationship Declaration. These principles reveal three novel insights: first, SGD's original design was fundamentally sound and should be inherited by MCP; second, both frameworks leave failure modes and inter-tool relationships unexploited -- gaps we identify and resolve; third, progressive disclosure emerges as a critical production-scaling insight under real-world token constraints. We provide concrete design patterns for each principle. These principles position schema-driven governance as a scalable mechanism for AI system oversight without requiring proprietary system inspection -- central to Software 3.0.
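The five principles can be made concrete in a single tool schema. The sketch below is a hypothetical MCP-style tool definition in Python; the `x-failure-modes` and `x-related-tools` extension fields, the `search_flights`/`book_flight` tool names, and the `summary` helper are illustrative assumptions, not part of the MCP specification or the paper.

```python
# Hypothetical MCP-style tool schema illustrating the paper's five principles.
# Extension fields prefixed with "x-" are illustrative, not standard MCP.

flight_search_tool = {
    "name": "search_flights",
    # (1) Semantic completeness: describe intent and usage, not just types.
    "description": (
        "Search for one-way or round-trip flights. Read-only: performs "
        "no booking. Call 'book_flight' only after user confirmation."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "origin": {"type": "string", "description": "IATA airport code"},
            "destination": {"type": "string", "description": "IATA airport code"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
    # (2) Explicit action boundaries: declare what the tool will never do.
    "annotations": {"readOnlyHint": True, "destructiveHint": False},
    # (3) Failure mode documentation (illustrative extension field).
    "x-failure-modes": [
        {"code": "NO_RESULTS", "recovery": "widen the date range by +/- 1 day"},
        {"code": "INVALID_IATA", "recovery": "ask the user to confirm the airport"},
    ],
    # (5) Inter-tool relationship declaration (illustrative extension field).
    "x-related-tools": {"precedes": ["book_flight"]},
}

# (4) Progressive disclosure: expose a compact summary by default and load
# the full schema on demand, respecting real-world token budgets.
def summary(tool):
    return {"name": tool["name"], "description": tool["description"][:80]}

print(summary(flight_search_tool)["name"])
```

The `annotations` hints (`readOnlyHint`, `destructiveHint`) do exist in MCP tool definitions; the rest of the governance metadata shows where the paper's identified gaps (failure modes, inter-tool relationships) would slot into such a schema.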
Related papers
- The Auton Agentic AI Framework [5.410458076724158]
The field of Artificial Intelligence is undergoing a transition from Generative AI to Agentic AI. This transition exposes a fundamental architectural mismatch: Large Language Models (LLMs) produce unstructured outputs, whereas the backend infrastructure they must control requires deterministic, schema-conformant inputs. This paper describes the Auton Agentic AI Framework, a principled architecture for the creation and governance of autonomous agents.
arXiv Detail & Related papers (2026-02-27T06:42:08Z) - From Prompt-Response to Goal-Directed Systems: The Evolution of Agentic AI Software Architecture [0.0]
Agentic AI denotes an architectural transition from stateless, prompt-driven generative models toward goal-directed systems. This paper examines this transition by connecting intelligent agent theories with contemporary LLM-centric approaches. The study identifies a convergence toward standardized agent loops, registries, and auditable control mechanisms.
arXiv Detail & Related papers (2026-02-11T03:34:48Z) - Bridging Symbolic Control and Neural Reasoning in LLM Agents: The Structured Cognitive Loop [0.0]
We introduce the Structured Cognitive Loop (SCL), a modular architecture that separates agent cognition into five phases: Retrieval, Cognition, Control, Action, and Memory (R-CCAM). At the core of SCL is Soft Symbolic Control, an adaptive governance mechanism that applies symbolic constraints to probabilistic inference. We provide a complete open-source implementation demonstrating the R-CCAM loop architecture, alongside a live GPT-4o-powered travel planning agent.
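A minimal sketch of such a loop, assuming one function per phase; the phase behavior below (keyword retrieval, a stub in place of LLM inference, predicate-based constraints) is illustrative and not the paper's implementation.

```python
# Sketch of an R-CCAM-style loop: Retrieval, Cognition, Control, Action, Memory.
# All phase implementations are illustrative stand-ins.

def retrieval(query, memory):
    # Retrieval: fetch context relevant to the query from memory.
    return [m for m in memory if query in m]

def cognition(query, context):
    # Cognition: stand-in for probabilistic LLM inference producing a plan.
    return {"plan": f"answer '{query}' using {len(context)} memories"}

def control(proposal, constraints):
    # Control: soft symbolic control vetoes plans violating any constraint.
    return all(check(proposal) for check in constraints)

def action(proposal):
    # Action: execute the approved plan.
    return proposal["plan"]

def scl_step(query, memory, constraints):
    context = retrieval(query, memory)
    proposal = cognition(query, context)
    if not control(proposal, constraints):
        return "blocked by symbolic constraint"
    result = action(proposal)
    memory.append(result)  # Memory: persist the outcome for future retrieval.
    return result

memory = ["trip to Paris", "trip to Rome"]
no_empty_plan = lambda p: bool(p["plan"])
print(scl_step("trip", memory, [no_empty_plan]))
```

The key design point the paper argues for is visible in `control`: symbolic constraints gate the probabilistic `cognition` output before any action executes.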
arXiv Detail & Related papers (2025-11-21T05:19:34Z) - Subject-Event Ontology Without Global Time: Foundations and Execution Semantics [51.56484100374058]
The formalization includes nine axioms (A1-A9) ensuring correctness of execution: monotonicity of history (I1), acyclicity of causality (I2), and traceability (I3). The formalization is applicable to distributed systems, microservice architectures, DLT platforms, and multiperspectivity scenarios (conflicting facts from different subjects). Special attention is given to the model-based approach (A9): event validation via schemas, actor authorization, and automatic construction of causal chains (W3) without global time.
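Causal ordering without global time can be sketched by replacing timestamps with explicit parent references and testing reachability; the event structure and field names below are illustrative assumptions, not the paper's formalization.

```python
# Illustrative sketch: causal precedence without global time.
# Events carry parent references instead of timestamps; a causally
# precedes b iff a is reachable from b via parent links.

def causally_precedes(events, a, b):
    frontier = [b]
    seen = set()
    while frontier:
        eid = frontier.pop()
        if eid == a:
            return True
        if eid in seen:
            continue
        seen.add(eid)
        frontier.extend(events[eid]["parents"])
    return False

events = {
    "e1": {"actor": "alice", "parents": []},
    "e2": {"actor": "bob", "parents": ["e1"]},
    "e3": {"actor": "alice", "parents": ["e2"]},
}
print(causally_precedes(events, "e1", "e3"))  # True: e1 -> e2 -> e3
```

Acyclicity of the parent graph (the survey's I2) is what guarantees this traversal terminates with a well-defined partial order.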
arXiv Detail & Related papers (2025-10-20T19:26:44Z) - A Survey of Vibe Coding with Large Language Models [93.88284590533242]
"Vibe Coding" is a development methodology where developers validate AI-generated implementations through outcome observation. Despite its transformative potential, the effectiveness of this emergent paradigm remains under-explored. This survey provides the first comprehensive and systematic review of Vibe Coding with large language models.
arXiv Detail & Related papers (2025-10-14T11:26:56Z) - CIR-CoT: Towards Interpretable Composed Image Retrieval via End-to-End Chain-of-Thought Reasoning [93.05917922306196]
Composed Image Retrieval (CIR) aims to find a target image from a reference image and a modification text. CIR-CoT is the first end-to-end retrieval-oriented MLLM designed to integrate explicit Chain-of-Thought (CoT) reasoning.
arXiv Detail & Related papers (2025-10-09T09:41:45Z) - LogiPlan: A Structured Benchmark for Logical Planning and Relational Reasoning in LLMs [7.012555483275226]
LogiPlan is a benchmark designed to evaluate the capabilities of large language models (LLMs) in logical planning and reasoning over complex relational structures. We evaluate state-of-the-art models including DeepSeek R1, Gemini 2.0 Pro, Gemini 2 Flash Thinking, GPT-4.5, GPT-4o, Llama 3.1 405B, O3-mini, O1, and Claude 3.7 Sonnet across three tasks.
arXiv Detail & Related papers (2025-06-12T09:47:02Z) - AI4Contracts: LLM & RAG-Powered Encoding of Financial Derivative Contracts [1.3060230641655135]
Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) are reshaping how AI systems extract and organize information from unstructured text. We introduce CDMizer, a template-driven, LLM- and RAG-based framework for structured text transformation.
arXiv Detail & Related papers (2025-06-01T16:05:00Z) - Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems [69.95482609893236]
Large Language Model-based Multi-Agent Systems (MASs) have emerged as a powerful paradigm for tackling complex tasks through collaborative intelligence. We call for a paradigm shift toward topology-aware MASs that explicitly model and dynamically optimize the structure of inter-agent interactions.
arXiv Detail & Related papers (2025-05-28T15:20:09Z) - Universal Information Extraction as Unified Semantic Matching [54.19974454019611]
We decouple information extraction into two abilities, structuring and conceptualizing, which are shared by different tasks and schemas.
Based on this paradigm, we propose to universally model various IE tasks with Unified Semantic Matching framework.
In this way, USM can jointly encode schema and input text, uniformly extract substructures in parallel, and controllably decode target structures on demand.
arXiv Detail & Related papers (2023-01-09T11:51:31Z) - Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to incorporate the current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.