Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective
- URL: http://arxiv.org/abs/2504.03255v1
- Date: Fri, 04 Apr 2025 08:10:02 GMT
- Title: Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective
- Authors: Garry A. Gabison, R. Patrick Xian
- Abstract summary: Agentic systems powered by large language models (LLMs) are becoming progressively more complex and capable. Their increasing agency and expanding deployment settings attract growing attention to effective governance policies, monitoring, and control protocols. We analyze the potential liability issues stemming from the delegated use of LLM agents and their extended systems from a principal-agent perspective.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Agentic systems powered by large language models (LLMs) are becoming progressively more complex and capable. Their increasing agency and expanding deployment settings attract growing attention to effective governance policies, monitoring, and control protocols. Based on the emerging landscape of the agentic market, we analyze the potential liability issues stemming from the delegated use of LLM agents and their extended systems from a principal-agent perspective. Our analysis complements existing risk-based studies on artificial agency and covers the spectrum of important aspects of the principal-agent relationship and their potential consequences at deployment. Furthermore, we motivate method development for technical governance along the directions of interpretability and behavior evaluations, reward and conflict management, and the mitigation of misalignment and misconduct through principled engineering of detection and fail-safe mechanisms. By illustrating the outstanding issues in AI liability for LLM-based agentic systems, we aim to inform system design, auditing, and monitoring approaches that enhance transparency and accountability.
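The governance directions named in the abstract are stated at a high level. As a purely illustrative sketch (not from the paper), the following Python snippet shows the control-flow shape of one fail-safe mechanism for a delegated agent: every proposed action passes a policy check before execution, and non-compliant actions are escalated back to the human principal instead of being executed. All names here (ProposedAction, PolicyGate, within_spending_limit) are hypothetical.

```python
# Hypothetical illustration of a deployment-side fail-safe for a delegated
# LLM agent: every proposed action passes a policy check before execution,
# and failures are escalated back to the human principal. All names are
# invented for this sketch and do not come from the paper.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ProposedAction:
    tool: str          # e.g. "send_email", "execute_trade"
    arguments: dict    # tool-call arguments proposed by the agent
    rationale: str     # agent-provided justification, kept for audit

@dataclass
class PolicyGate:
    # Each check returns (allowed, reason); all checks must pass.
    checks: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def review(self, action: ProposedAction) -> bool:
        for check in self.checks:
            allowed, reason = check(action)
            self.audit_log.append((action.tool, allowed, reason))
            if not allowed:
                return False
        return True

def within_spending_limit(action: ProposedAction):
    # Example check: cap the financial authority delegated to the agent.
    amount = action.arguments.get("amount", 0)
    return (amount <= 100.0, f"amount={amount} vs limit=100.0")

def run_with_failsafe(action: ProposedAction, gate: PolicyGate,
                      execute: Callable[[ProposedAction], Any],
                      escalate: Callable[[ProposedAction], Any]):
    # Fail-safe: execute only policy-compliant actions; otherwise hand the
    # decision back to the principal instead of acting autonomously.
    return execute(action) if gate.review(action) else escalate(action)

# Usage: a trade above the delegated limit is blocked and escalated.
gate = PolicyGate(checks=[within_spending_limit])
action = ProposedAction("execute_trade", {"amount": 250.0}, "rebalance portfolio")
run_with_failsafe(action, gate,
                  execute=lambda a: print("executed:", a.tool),
                  escalate=lambda a: print("escalated to principal:", a.tool))
```

A production system would also need tamper-evident audit logs and checks derived from the actual delegation contract; the sketch only illustrates the detect-block-escalate pattern.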
Related papers
- Agentic Business Process Management: The Past 30 Years And Practitioners' Future Perspectives [0.7270112855088837]
We conduct a series of interviews with BPM practitioners to explore their understanding, expectations, and concerns related to agent autonomy, adaptability, human collaboration, and governance in processes.
The findings reflect both challenges, including data inconsistencies, manual interventions, identification of process bottlenecks, and actionability of process improvements, and opportunities, including enhanced efficiency, predictive process insights, and proactive decision-making support.
These concerns underscore the need for a robust methodological framework for managing agents in organizations.
arXiv Detail & Related papers (2025-03-23T20:15:24Z)
- Towards Agentic Recommender Systems in the Era of Multimodal Large Language Models [75.4890331763196]
Recent breakthroughs in Large Language Models (LLMs) have led to the emergence of agentic AI systems.
LLM-based Agentic RS (LLM-ARS) can offer more interactive, context-aware, and proactive recommendations.
arXiv Detail & Related papers (2025-03-20T22:37:15Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- A Survey on Trustworthy LLM Agents: Threats and Countermeasures [67.23228612512848]
Large Language Models (LLMs) and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems.
We propose the TrustAgent framework, a comprehensive study on the trustworthiness of agents.
arXiv Detail & Related papers (2025-03-12T08:42:05Z)
- Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- Position: Towards a Responsible LLM-empowered Multi-Agent Systems [22.905804138387854]
The rise of Agent AI and Large Language Model-powered Multi-Agent Systems (LLM-MAS) has underscored the need for responsible and dependable system operation.
These advancements introduce critical challenges: LLM agents exhibit inherent unpredictability, and uncertainties in their outputs can compound, threatening system stability.
To address these risks, a human-centered design approach with active dynamic moderation is essential.
arXiv Detail & Related papers (2025-02-03T16:04:30Z)
- AgentOps: Enabling Observability of LLM Agents [12.49728300301026]
Large language model (LLM) agents raise significant concerns about AI safety due to their autonomous and non-deterministic behavior.
We present a comprehensive taxonomy of AgentOps, identifying the artifacts and associated data that should be traced throughout the entire lifecycle of agents to achieve effective observability.
Our taxonomy serves as a reference template for developers to design and implement AgentOps infrastructure that supports monitoring, logging, and analytics.
arXiv Detail & Related papers (2024-11-08T02:31:03Z)
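AgentOps prescribes what to trace rather than a concrete implementation. As a minimal sketch of the kind of infrastructure such a taxonomy targets (the record fields below are assumptions for illustration, not the paper's taxonomy), a decorator can capture one trace record per agent step:

```python
# Minimal illustration of agent observability: a decorator that records a
# trace record (inputs, output, latency, errors) for every agent step.
# Field names are assumptions for this sketch, not the AgentOps taxonomy.
import functools
import time
import uuid

TRACE_LOG: list[dict] = []  # a real system would ship these to a trace store

def traced(step_name: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "trace_id": str(uuid.uuid4()),
                "step": step_name,
                "inputs": {"args": args, "kwargs": kwargs},
                "start": time.time(),
            }
            try:
                result = fn(*args, **kwargs)
                record["output"] = result
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                record["latency_s"] = time.time() - record["start"]
                TRACE_LOG.append(record)
        return wrapper
    return decorator

@traced("plan")
def plan(goal: str) -> list[str]:
    # Stand-in for an LLM planning call.
    return [f"step 1 for {goal}", f"step 2 for {goal}"]

plan("book a flight")  # TRACE_LOG now holds one record for the "plan" step
```

In practice, records would be persisted and correlated by session, which is what makes post-hoc auditing of non-deterministic agent behavior possible.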
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents [74.16170899755281]
We introduce AgentBoard, a pioneering comprehensive benchmark and accompanying open-source evaluation framework tailored to analytical evaluation of LLM agents. AgentBoard offers a fine-grained progress rate metric that captures incremental advancements, as well as a comprehensive evaluation toolkit. This not only sheds light on the capabilities and limitations of LLM agents but also propels the interpretability of their performance to the forefront.
arXiv Detail & Related papers (2024-01-24T01:51:00Z)
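The progress rate metric is described only at a high level in the summary above. One plausible reading, sketched under that assumption (the exact AgentBoard definition may differ), scores an episode by the largest fraction of annotated subgoals satisfied at any turn, so partial progress is credited even when the task ultimately fails:

```python
# Hypothetical sketch of a fine-grained progress-rate metric in the spirit
# of AgentBoard: credit the best partial completion across turns rather
# than only success/failure. The exact AgentBoard definition may differ.
def progress_rate(per_turn_states: list[set[str]], subgoals: set[str]) -> float:
    """Largest fraction of subgoals satisfied at any point in the episode."""
    if not subgoals:
        return 1.0
    best = max((len(state & subgoals) for state in per_turn_states), default=0)
    return best / len(subgoals)

# Example: 2 of 3 subgoals reached by turn 3 -> progress rate of 2/3,
# even though the task as a whole was not completed.
episode = [set(), {"picked_key"}, {"picked_key", "opened_door"}]
goals = {"picked_key", "opened_door", "found_gem"}
assert abs(progress_rate(episode, goals) - 2 / 3) < 1e-9
```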
- Visibility into AI Agents [9.067567737098594]
Increased delegation of commercial, scientific, governmental, and personal activities to AI agents may exacerbate existing societal risks.
We assess three categories of measures to increase visibility into AI agents: agent identifiers, real-time monitoring, and activity logging.
arXiv Detail & Related papers (2024-01-23T23:18:33Z)
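As a toy illustration of the three visibility measures (agent identifiers, real-time monitoring, and activity logging), not an implementation from the paper, the sketch below stamps every action with a persistent agent identifier, appends it to an activity log, and passes it to an optional real-time monitor callback; all names and fields are invented:

```python
# Toy illustration of the three visibility measures: an agent identifier
# attached to every action, an append-only activity log, and a simple
# real-time monitor hook. All names are invented for this sketch.
import datetime

ACTIVITY_LOG: list[dict] = []

class VisibleAgent:
    def __init__(self, agent_id: str, deployer: str, monitor=None):
        self.agent_id = agent_id  # stable identifier, disclosed to counterparties
        self.deployer = deployer  # who is accountable for this agent
        self.monitor = monitor    # optional real-time monitoring callback

    def act(self, action: str, target: str) -> dict:
        entry = {
            "agent_id": self.agent_id,
            "deployer": self.deployer,
            "action": action,
            "target": target,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        ACTIVITY_LOG.append(entry)   # activity logging
        if self.monitor is not None:
            self.monitor(entry)      # real-time monitoring hook
        return entry

# Usage: flag any action touching a payments API as it happens.
agent = VisibleAgent("agent-0017", "ExampleCorp",
                     monitor=lambda e: e["target"] == "payments_api"
                     and print("ALERT:", e))
agent.act("read", "public_docs")
agent.act("write", "payments_api")   # triggers the monitor alert
```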
- Global and Local Analysis of Interestingness for Competency-Aware Deep Reinforcement Learning [0.0]
We extend a recently proposed framework for explainable reinforcement learning (RL) based on analyses of "interestingness".
Our tools provide insights into RL agents' competence, covering both their capabilities and limitations, enabling users to make more informed decisions.
arXiv Detail & Related papers (2022-11-11T17:48:42Z)