Governing AI Agents
- URL: http://arxiv.org/abs/2501.07913v2
- Date: Tue, 11 Feb 2025 14:30:44 GMT
- Title: Governing AI Agents
- Authors: Noam Kolt
- Abstract summary: The Article examines the economic theory of principal-agent problems and the common law doctrine of agency relationships.
It identifies problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty.
It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
- Abstract: The field of AI is undergoing a fundamental transition from generative models that can produce synthetic content to artificial agents that can plan and execute complex tasks with only limited human involvement. Companies that pioneered the development of language models have now built AI agents that can independently navigate the internet, perform a wide range of online tasks, and increasingly serve as AI personal assistants and virtual coworkers. The opportunities presented by this new technology are tremendous, as are the associated risks. Fortunately, there exist robust analytic frameworks for confronting many of these challenges, namely, the economic theory of principal-agent problems and the common law doctrine of agency relationships. Drawing on these frameworks, this Article makes three contributions. First, it uses agency law and theory to identify and characterize problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. Second, it illustrates the limitations of conventional solutions to agency problems: incentive design, monitoring, and enforcement might not be effective for governing AI agents that make uninterpretable decisions and operate at unprecedented speed and scale. Third, the Article explores the implications of agency law and theory for designing and regulating AI agents, arguing that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
Related papers
- Agentic AI: Autonomy, Accountability, and the Algorithmic Society [0.2209921757303168]
Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn tasks.
This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks.
We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
arXiv Detail & Related papers (2025-02-01T03:14:59Z)
- Decentralized Governance of Autonomous AI Agents [0.0]
ETHOS is a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs).
It establishes a global registry for AI agents, enabling dynamic risk classification, proportional oversight, and automated compliance monitoring.
By integrating philosophical principles of rationality, ethical grounding, and goal alignment, ETHOS aims to create a robust research agenda for promoting trust, transparency, and participatory governance.
arXiv Detail & Related papers (2024-12-22T18:01:49Z)
- Beyond the Sum: Unlocking AI Agents Potential Through Market Forces [0.0]
AI agents have the theoretical capacity to operate as independent economic actors within digital markets.
Existing digital infrastructure, however, presents significant barriers to their participation.
We argue that addressing these infrastructure challenges represents a fundamental step toward enabling new forms of economic organization.
arXiv Detail & Related papers (2024-12-19T09:40:40Z)
- Follow the money: a startup-based measure of AI exposure across occupations, industries and regions [0.0]
Existing measures of AI occupational exposure focus on AI's theoretical potential to substitute or complement human labour on the basis of technical feasibility.
We introduce the AI Startup Exposure (AISE) index, a novel metric based on occupational descriptions from O*NET and AI applications developed by startups.
Our findings suggest that AI adoption will be gradual and shaped by social factors as much as by the technical feasibility of AI applications.
arXiv Detail & Related papers (2024-12-06T10:25:05Z)
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle to integrate diverse, capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI, an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases in the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.