Agentic AI: Autonomy, Accountability, and the Algorithmic Society
- URL: http://arxiv.org/abs/2502.00289v3
- Date: Sat, 15 Feb 2025 13:11:05 GMT
- Title: Agentic AI: Autonomy, Accountability, and the Algorithmic Society
- Authors: Anirban Mukherjee, Hannah Hanwen Chang
- Abstract summary: Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn workflows. This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks. We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
- Score: 0.2209921757303168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn workflows. Unlike traditional generative AI, which responds reactively to prompts, agentic AI proactively orchestrates processes, such as autonomously managing complex tasks or making real-time decisions. This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks. In this paper, we explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects. Central to our analysis is the tension between novelty and usefulness in AI-generated creative outputs, as well as the intellectual property and authorship challenges arising from AI autonomy. We highlight gaps in responsibility attribution and liability that create a "moral crumple zone"--a condition where accountability is diffused across multiple actors, leaving end-users and developers in precarious legal and ethical positions. We examine the competitive dynamics of two-sided algorithmic markets, where both sellers and buyers deploy AI agents, potentially mitigating or amplifying tacit collusion risks. We explore the potential for emergent self-regulation within networks of agentic AI--the development of an "algorithmic society"--raising critical questions: To what extent would these norms align with societal values? What unintended consequences might arise? How can transparency and accountability be ensured? Addressing these challenges will necessitate interdisciplinary collaboration to redefine legal accountability, align AI-driven choices with stakeholder values, and maintain ethical safeguards. We advocate for frameworks that balance autonomy with accountability, ensuring all parties can harness agentic AI's potential while preserving trust, fairness, and societal welfare.
Related papers
- Ethical Challenges of Using Artificial Intelligence in Judiciary [0.0]
AI has the potential to revolutionize the functioning of the judiciary and the dispensation of justice.
Courts around the world have begun embracing AI technology as a means to enhance the administration of justice.
However, the use of AI in the judiciary poses a range of ethical challenges.
arXiv Detail & Related papers (2025-04-27T15:51:56Z) - Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency [0.0]
We argue that while current AI systems are highly sophisticated, they lack genuine agency and autonomy.
We do not rule out the possibility of future systems that could achieve a limited form of artificial moral agency without consciousness.
arXiv Detail & Related papers (2025-04-11T03:48:40Z) - Governing AI Agents [0.2913760942403036]
Companies that pioneered the development of generative AI tools are now building AI agents.
This Article uses agency law and theory to identify and characterize problems arising from AI agents.
It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z) - Decentralized Governance of Autonomous AI Agents [0.0]
ETHOS is a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs).
It establishes a global registry for AI agents, enabling dynamic risk classification, proportional oversight, and automated compliance monitoring.
By integrating philosophical principles of rationality, ethical grounding, and goal alignment, ETHOS aims to create a robust research agenda for promoting trust, transparency, and participatory governance.
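As a rough illustration of the registry idea only (not the ETHOS specification; the record fields, scoring rule, and risk tiers below are invented for this sketch), a minimal agent registry with dynamic risk classification might look like:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch of a registry entry with dynamic risk classification.
# Field names, the scoring rule, and the tier thresholds are illustrative
# assumptions, not taken from the ETHOS paper.

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

@dataclass
class AgentRecord:
    agent_id: str
    operator: str
    declared_capabilities: list[str] = field(default_factory=list)
    incident_count: int = 0

    def risk_tier(self) -> RiskTier:
        # Toy rule: oversight escalates as capabilities and incidents accumulate.
        score = len(self.declared_capabilities) + 2 * self.incident_count
        if score >= 6:
            return RiskTier.HIGH
        if score >= 3:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

registry: dict[str, AgentRecord] = {}
registry["agent-001"] = AgentRecord("agent-001", "acme-corp", ["payments", "negotiation"])
print(registry["agent-001"].risk_tier())  # RiskTier.MINIMAL under this toy rule
```

In a DeGov setting, the same record and classification logic would live on-chain (for example, in a smart contract), so that reclassification and compliance checks are executed and audited automatically rather than by a central operator.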
arXiv Detail & Related papers (2024-12-22T18:01:49Z) - Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Artificial Intelligence and Dual Contract [2.1756081703276]
We develop a model where two principals, each equipped with independent Q-learning algorithms, interact with a single agent.
Our findings reveal that the strategic behavior of AI principals hinges crucially on the alignment of their profits.
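To make this setup concrete, here is a minimal toy version of two independent Q-learning principals competing for a single agent's effort (a stateless sketch with an invented offer grid, payoff function, and learning parameters; it is not the authors' implementation):

```python
import random

# Toy version of the dual-principal setup: two principals each run an
# independent epsilon-greedy Q-learner over a small grid of piece-rate
# offers; a stylized agent works for whichever principal offers more.
# All payoffs and parameters here are illustrative assumptions.

OFFERS = [0.1, 0.3, 0.5, 0.7, 0.9]      # piece rates a principal can offer
ALPHA, EPSILON, ROUNDS = 0.1, 0.1, 50_000
OUTPUT = 1.0                             # value produced when the agent works

q = [[0.0] * len(OFFERS) for _ in range(2)]  # one Q-row per principal (stateless bandit)

def pick(qrow):
    # epsilon-greedy action selection over the discrete offer grid
    if random.random() < EPSILON:
        return random.randrange(len(OFFERS))
    return max(range(len(OFFERS)), key=lambda a: qrow[a])

for _ in range(ROUNDS):
    acts = [pick(q[0]), pick(q[1])]
    rates = [OFFERS[a] for a in acts]
    winner = 0 if rates[0] >= rates[1] else 1     # agent accepts the better offer
    for p in range(2):
        # principal's profit: output minus wage if chosen, zero otherwise
        reward = (OUTPUT - rates[p]) if p == winner else 0.0
        q[p][acts[p]] += ALPHA * (reward - q[p][acts[p]])

greedy = [OFFERS[max(range(len(OFFERS)), key=lambda a: q[p][a])] for p in range(2)]
print("Learned offers:", greedy)
```

Whether the learned offers drift upward (competition for the agent) or settle low (a collusion-like outcome) depends on how the principals' payoffs are aligned, which is the kind of question the paper studies.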
arXiv Detail & Related papers (2023-03-22T07:31:44Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.