Autonomous AI and Ownership Rules
- URL: http://arxiv.org/abs/2602.20169v1
- Date: Mon, 09 Feb 2026 18:58:52 GMT
- Title: Autonomous AI and Ownership Rules
- Authors: Frank Fagan
- Abstract summary: In cases where AI is traceable to an originator, accession doctrine provides an efficient means of assigning ownership. In strategic ownership dissolution, autonomous AI is intentionally designed to evade attribution, creating opportunities for tax arbitrage and regulatory avoidance. To counteract these inefficiencies, bounty systems, private incentives, and government subsidies are proposed as mechanisms to encourage AI capture and prevent ownerless AI from distorting markets.
- Score: 0.09444500584367876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This Article examines the circumstances in which AI-generated outputs remain linked to their creators and the points at which they lose that connection, whether through accident, deliberate design, or emergent behavior. In cases where AI is traceable to an originator, accession doctrine provides an efficient means of assigning ownership, preserving investment incentives while maintaining accountability. When AI becomes untraceable -- whether through carelessness, deliberate obfuscation, or emergent behavior -- first possession rules can encourage reallocation to new custodians who are incentivized to integrate AI into productive use. The analysis further explores strategic ownership dissolution, where autonomous AI is intentionally designed to evade attribution, creating opportunities for tax arbitrage and regulatory avoidance. To counteract these inefficiencies, bounty systems, private incentives, and government subsidies are proposed as mechanisms to encourage AI capture and prevent ownerless AI from distorting markets.
Related papers
- One Bad NOFO? AI Governance in Federal Grantmaking [0.2179228399562846]
U.S. agencies have an overlooked AI governance role when directing billions of dollars in federal financial assistance. As discretionary grantmakers, agencies guide and restrict what grant winners do -- a hidden lever for AI governance. We use a novel dataset of over 40,000 non-defense federal grant notices of funding opportunity (NOFOs) posted to the U.S. federal grants website between 2009 and 2024.
arXiv Detail & Related papers (2025-05-13T00:08:22Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking [0.0]
In the face of AI technology, individuals will increasingly rely on AI agents to navigate life's growing complexities. This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions or having autonomy compromised.
arXiv Detail & Related papers (2025-04-24T19:34:43Z)
- Agentic AI: Autonomy, Accountability, and the Algorithmic Society [0.2209921757303168]
Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn workflows. This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks. We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
arXiv Detail & Related papers (2025-02-01T03:14:59Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Filling gaps in trustworthy development of AI [20.354549569362035]
Growing awareness of potential risks from AI systems has spurred action to address those risks.
But the principles often leave a gap between the "what" and the "how" of trustworthy AI development.
There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness.
arXiv Detail & Related papers (2021-12-14T22:45:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.