Agentic Problem Frames: A Systematic Approach to Engineering Reliable Domain Agents
- URL: http://arxiv.org/abs/2602.19065v1
- Date: Sun, 22 Feb 2026 06:32:32 GMT
- Title: Agentic Problem Frames: A Systematic Approach to Engineering Reliable Domain Agents
- Authors: Chanjin Park
- Abstract summary: Large Language Models (LLMs) are evolving into autonomous agents, yet current "frameless" development--relying on ambiguous natural language--leads to critical risks such as scope creep and open-loop failures. This study proposes Agentic Problem Frames (APF), a systematic engineering framework that shifts focus from internal model intelligence to the structured interaction between the agent and its environment.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) are evolving into autonomous agents, yet current "frameless" development--relying on ambiguous natural language without engineering blueprints--leads to critical risks such as scope creep and open-loop failures. To ensure industrial-grade reliability, this study proposes Agentic Problem Frames (APF), a systematic engineering framework that shifts focus from internal model intelligence to the structured interaction between the agent and its environment. The APF establishes a dynamic specification paradigm where intent is concretized at runtime through domain knowledge injection. At its core, the Act-Verify-Refine (AVR) loop functions as a closed-loop control system that transforms execution results into verified knowledge assets, driving system behavior toward asymptotic convergence to mission requirements (R). To operationalize this, this study introduces the Agentic Job Description (AJD), a formal specification tool that defines jurisdictional boundaries, operational contexts, and epistemic evaluation criteria. The efficacy of this framework is validated through two contrasting case studies: a delegated proxy model for business travel and an autonomous supervisor model for industrial equipment management. By applying AJD-based specification and APF modeling to these scenarios, the analysis demonstrates how operational scenarios are systematically controlled within defined boundaries. These cases provide a conceptual proof that agent reliability stems not from a model's internal reasoning alone, but from the rigorous engineering structures that anchor stochastic AI within deterministic business processes, thereby enabling the development of verifiable and dependable domain agents.
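The abstract describes the Act-Verify-Refine (AVR) loop as a closed-loop control system that converges toward mission requirements (R). The paper gives no code, so the following is a minimal illustrative sketch of that control structure; every name here (`MissionState`, `act`, `verify`, `refine`, `avr_loop`) is a hypothetical stand-in, not an API from the paper.

```python
# Hypothetical sketch of the Act-Verify-Refine (AVR) closed loop from the
# abstract. All identifiers are illustrative assumptions, not paper APIs.
from dataclasses import dataclass, field

@dataclass
class MissionState:
    """Tracks progress toward mission requirements R."""
    requirements: set                       # mission requirements R
    satisfied: set = field(default_factory=set)
    knowledge: list = field(default_factory=list)  # verified knowledge assets

def act(state: MissionState):
    """Act: attempt one as-yet-unmet requirement (stand-in for agent action)."""
    unmet = state.requirements - state.satisfied
    return next(iter(sorted(unmet)), None)

def verify(result) -> bool:
    """Verify: closed-loop check that the action actually met its target."""
    return result is not None   # trivially accept in this sketch

def refine(result, state: MissionState) -> None:
    """Refine: fold the verified execution result back in as a knowledge asset."""
    state.satisfied.add(result)
    state.knowledge.append(f"verified:{result}")

def avr_loop(state: MissionState, max_steps: int = 10) -> bool:
    """Iterate Act-Verify-Refine until behavior converges to R."""
    for _ in range(max_steps):
        result = act(state)
        if result is None:      # all requirements satisfied: converged to R
            return True
        if verify(result):
            refine(result, state)
    return state.requirements <= state.satisfied

state = MissionState(requirements={"book_flight", "reserve_hotel"})
print(avr_loop(state))    # → True
print(state.knowledge)    # → ['verified:book_flight', 'verified:reserve_hotel']
```

The point of the sketch is only the control topology: execution results are not trusted as-is but pass through an explicit verification step before being accumulated as knowledge, which is what distinguishes the loop from open-loop prompting.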
Related papers
- Decoding ML Decision: An Agentic Reasoning Framework for Large-Scale Ranking System [26.405948122941467]
We present GEARS, a framework that reframes ranking optimization as an autonomous discovery process. We show that GEARS consistently identifies superior, near-Pareto-efficient policies by synergizing algorithmic signals with deep ranking context.
arXiv Detail & Related papers (2026-02-20T22:24:01Z)
- The Why Behind the Action: Unveiling Internal Drivers via Agentic Attribution [63.61358761489141]
Large Language Model (LLM)-based agents are widely used in real-world applications such as customer service, web navigation, and software engineering. We propose a novel framework for general agentic attribution, designed to identify the internal factors driving agent actions regardless of the task outcome. We validate our framework across a diverse suite of agentic scenarios, including standard tool use and subtle reliability risks like memory-induced bias.
arXiv Detail & Related papers (2026-01-21T15:22:21Z) - Institutional AI: A Governance Framework for Distributional AGI Safety [1.3763052684269788]
We identify three structural problems that emerge from core properties of AI models. The solution is Institutional AI, a system-level approach that treats alignment as a question of effective governance of AI agent collectives.
arXiv Detail & Related papers (2026-01-15T17:08:26Z) - Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - Assured Autonomy: How Operations Research Powers and Orchestrates Generative AI Systems [18.881800772626427]
We argue generative models can be fragile in operational domains unless paired with mechanisms that provide feasibility, robustness to distribution shift, and stress testing. We develop a conceptual framework for assured autonomy grounded in operations research. These elements define a research agenda for assured autonomy in safety-critical, reliability-sensitive operational domains.
arXiv Detail & Related papers (2025-12-30T04:24:06Z) - AgentGuard: Runtime Verification of AI Agents [1.14219428942199]
AgentGuard is a framework for runtime verification of Agentic AI systems. It provides continuous, quantitative assurance through a new paradigm called Dynamic Probabilistic Assurance.
arXiv Detail & Related papers (2025-09-28T13:08:50Z) - The Landscape of Agentic Reinforcement Learning for LLMs: A Survey [103.32591749156416]
The emergence of agentic reinforcement learning (Agentic RL) marks a paradigm shift from conventional reinforcement learning applied to large language models (LLM RL). This survey formalizes this conceptual shift by contrasting the degenerate single-step Markov Decision Processes (MDPs) of LLM RL with the temporally extended, partially observable Markov decision processes (POMDPs) that define Agentic RL.
arXiv Detail & Related papers (2025-09-02T17:46:26Z) - Formalizing Operational Design Domains with the Pkl Language [0.4349640169711269]
The deployment of automated functions that can operate without direct human supervision has changed safety evaluation in domains seeking higher levels of automation. To make a convincing safety claim, the developer must present a thorough justification argument, supported by evidence, that a function is free from unreasonable risk when operated in its intended context. This paper presents a way to formalize an Operational Design Domain (ODD) specification in the Pkl language.
arXiv Detail & Related papers (2025-09-02T11:41:27Z) - A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems [53.37728204835912]
Most existing AI systems rely on manually crafted configurations that remain static after deployment. Recent research has explored agent evolution techniques that aim to automatically enhance agent systems based on interaction data and environmental feedback. This survey aims to provide researchers and practitioners with a systematic understanding of self-evolving AI agents.
arXiv Detail & Related papers (2025-08-10T16:07:32Z) - AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability [84.52205243353761]
Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment. We investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation.
arXiv Detail & Related papers (2025-04-06T20:35:44Z) - SOPBench: Evaluating Language Agents at Following Standard Operating Procedures and Constraints [59.645885492637845]
SOPBench is an evaluation pipeline that transforms each service-specific SOP code program into a directed graph of executable functions and requires agents to call these functions based on natural language SOP descriptions. We evaluate 18 leading models, and results show the task is challenging even for top-tier models.
arXiv Detail & Related papers (2025-03-11T17:53:02Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.