Agentic AI Governance and Lifecycle Management in Healthcare
- URL: http://arxiv.org/abs/2601.15630v1
- Date: Thu, 22 Jan 2026 04:01:41 GMT
- Title: Agentic AI Governance and Lifecycle Management in Healthcare
- Authors: Chandra Prakash, Mary Lind, Avneesh Sisodia
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Healthcare organizations are beginning to embed agentic AI into routine workflows, including clinical documentation support and early-warning monitoring. As these capabilities diffuse across departments and vendors, health systems face agent sprawl, causing duplicated agents, unclear accountability, inconsistent controls, and tool permissions that persist beyond the original use case. Existing AI governance frameworks emphasize lifecycle risk management but provide limited guidance for the day-to-day operations of agent fleets. We propose a Unified Agent Lifecycle Management (UALM) blueprint derived from a rapid, practice-oriented synthesis of governance standards, agent security literature, and healthcare compliance requirements. UALM maps recurring gaps onto five control-plane layers: (1) an identity and persona registry, (2) orchestration and cross-domain mediation, (3) PHI-bounded context and memory, (4) runtime policy enforcement with kill-switch triggers, and (5) lifecycle management and decommissioning linked to credential revocation and audit logging. A companion maturity model supports staged adoption. UALM offers healthcare CIOs, CISOs, and clinical leaders an implementable pattern for audit-ready oversight that preserves local innovation and enables safer scaling across clinical and administrative domains.
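The abstract's five control-plane layers can be illustrated with a minimal sketch of layer (1), an identity and persona registry, together with the layer (4)/(5) runtime enforcement and kill-switch behavior. All class and field names here (`AgentRecord`, `AgentRegistry`, `kill_switch`) are illustrative assumptions; the paper defines the layers conceptually, not as an API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional, Set

@dataclass
class AgentRecord:
    """Layer (1): registry entry binding an agent identity to a persona
    and an explicit set of tool permissions."""
    agent_id: str
    persona: str                      # e.g. "clinical-documentation-assistant"
    tool_permissions: Set[str] = field(default_factory=set)
    active: bool = True
    decommissioned_at: Optional[datetime] = None

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: Dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def authorize(self, agent_id: str, tool: str) -> bool:
        """Layer (4): runtime policy enforcement — deny unknown,
        decommissioned, or over-permissioned agents."""
        rec = self._agents.get(agent_id)
        return bool(rec and rec.active and tool in rec.tool_permissions)

    def kill_switch(self, agent_id: str) -> None:
        """Layers (4)/(5): deactivate the agent and revoke its tool
        permissions, recording the decommissioning time for audit."""
        rec = self._agents.get(agent_id)
        if rec:
            rec.active = False
            rec.tool_permissions.clear()   # credential revocation
            rec.decommissioned_at = datetime.now(timezone.utc)
```

This is the pattern that addresses the "permissions that persist beyond the original use case" problem in the abstract: decommissioning and credential revocation happen in one operation, so a retired agent cannot retain stale tool access.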
Related papers
- Aegis: Towards Governance, Integrity, and Security of AI Voice Agents [52.7512082818639]
We propose Aegis, a framework for the governance, integrity, and security of voice agents. We evaluate the framework through case studies in banking call centers, IT support, and logistics. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility.
arXiv Detail & Related papers (2026-02-07T05:51:36Z)
- AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security [126.49733412191416]
Current guardrail models lack agentic risk awareness and transparency in risk diagnosis. We propose a unified three-dimensional taxonomy that categorizes agentic risks by their source (where), failure mode (how), and consequence (what). We introduce a new fine-grained agentic safety benchmark (ATBench) and a Diagnostic Guardrail framework for agent safety and security (AgentDoG).
arXiv Detail & Related papers (2026-01-26T13:45:41Z)
- AgentGuardian: Learning Access Control Policies to Govern AI Agent Behavior [20.817336331051752]
AgentGuardian governs and protects AI agent operations by enforcing context-aware access-control policies. It effectively detects malicious or misleading inputs while preserving normal agent functionality.
arXiv Detail & Related papers (2026-01-15T14:33:36Z)
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigating these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z)
- A Blockchain-Monitored Agentic AI Architecture for Trusted Perception-Reasoning-Action Pipelines [0.0]
The application of agentic AI systems in autonomous decision-making is growing in the areas of healthcare, smart cities, digital forensics, and supply chain management. The paper proposes a single architecture model comprising a LangChain-based multi-agent system with a permissioned blockchain to guarantee continuous monitoring, policy enforcement, and immutable auditability of agentic actions.
arXiv Detail & Related papers (2025-12-24T06:20:28Z)
- VeriGuard: Enhancing LLM Agent Safety via Verified Code Generation [40.594947933580464]
The deployment of autonomous AI agents in sensitive domains, such as healthcare, introduces critical risks to safety, security, and privacy. We introduce VeriGuard, a novel framework that provides formal safety guarantees for LLM-based agents.
arXiv Detail & Related papers (2025-10-03T04:11:43Z)
- Diagnose, Localize, Align: A Full-Stack Framework for Reliable LLM Multi-Agent Systems under Instruction Conflicts [75.20929587906228]
Large Language Model (LLM)-powered multi-agent systems (MAS) have rapidly advanced collaborative reasoning, tool use, and role-specialized coordination in complex tasks. However, reliability-critical deployment remains hindered by a systemic failure mode: hierarchical compliance under instruction conflicts.
arXiv Detail & Related papers (2025-09-27T08:43:34Z)
- Beyond Jailbreaking: Auditing Contextual Privacy in LLM Agents [43.303548143175256]
This study proposes an auditing framework for conversational privacy that quantifies an agent's susceptibility to risks. The proposed Conversational Manipulation for Privacy Leakage (CMPL) framework is designed to stress-test agents that enforce strict privacy directives.
arXiv Detail & Related papers (2025-06-11T20:47:37Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale. We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's lifecycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control [7.228060525494563]
This paper posits the imperative for a novel Agentic AI IAM framework. We propose a comprehensive framework built upon rich, verifiable Agent Identities (IDs). We also explore how Zero-Knowledge Proofs (ZKPs) enable privacy-preserving attribute disclosure and verifiable policy compliance.
arXiv Detail & Related papers (2025-05-25T20:21:55Z)
- Towards a HIPAA Compliant Agentic AI System in Healthcare [3.6185342807265415]
This paper introduces a HIPAA-compliant Agentic AI framework that enforces regulatory compliance through dynamic, context-aware policy enforcement. Our framework integrates three core mechanisms: (1) Attribute-Based Access Control (ABAC) for granular governance, (2) a hybrid PHI sanitization pipeline combining pattern matching and a BERT-based model to minimize leakage, and (3) immutable audit trails for compliance verification.
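Mechanism (1) of this framework, ABAC, can be sketched as a policy check that compares subject and resource attributes per request. The attribute names here (`role`, `purpose`, `clearance`, `phi_level`) are illustrative assumptions, not attributes taken from the paper.

```python
# Illustrative ABAC decision in the spirit of the paper's mechanism (1):
# access is granted only when subject attributes satisfy the resource's
# policy for the requested action. Attribute names are hypothetical.

def abac_allow(subject: dict, resource: dict, action: str) -> bool:
    if action == "read":
        return (
            subject.get("role") in {"clinician", "care-coordinator"}  # who
            and subject.get("purpose") == "treatment"                 # why
            and resource.get("phi_level", 0) <= subject.get("clearance", 0)  # sensitivity vs. clearance
        )
    # Default-deny for any action without an explicit rule.
    return False
```

Unlike role-based access control, the decision depends on the full request context (purpose of use, data sensitivity), which is what makes the governance "granular" in the abstract's sense.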
arXiv Detail & Related papers (2025-04-24T15:38:20Z)
- Agentic Business Process Management: Practitioner Perspectives on Agent Governance in Business Processes [0.7270112855088837]
With the rise of generative AI, industry interest in software agents is growing. This paper investigates how organizations can effectively govern AI agents. It outlines six key recommendations for the responsible adoption of AI agents.
arXiv Detail & Related papers (2025-03-23T20:15:24Z)
- SOPBench: Evaluating Language Agents at Following Standard Operating Procedures and Constraints [59.645885492637845]
SOPBench is an evaluation pipeline that transforms each service-specific SOP code program into a directed graph of executable functions and requires agents to call these functions based on natural language SOP descriptions. We evaluate 18 leading models, and results show the task is challenging even for top-tier models.
arXiv Detail & Related papers (2025-03-11T17:53:02Z)
- Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.