The Digital Gorilla: Rebalancing Power in the Age of AI
- URL: http://arxiv.org/abs/2602.20080v1
- Date: Mon, 23 Feb 2026 17:46:54 GMT
- Title: The Digital Gorilla: Rebalancing Power in the Age of AI
- Authors: M. Alejandra Parra-Orlandoni, Roxanne A. Schnyder, Christopher J. Mallet
- Abstract summary: The article offers a conceptual foundation for AI governance by treating such systems as a fourth societal actor. It develops a Four Societal Actors framework that maps how power flows among these actors across five power modalities. It advances a federalized, polycentric governance architecture and institutionalizes dynamic checks and balances.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Contemporary artificial intelligence (AI) policy suffers from a basic categorical error. Existing frameworks rely on analogizing AI to inherited technology types -- such as products, platforms, or infrastructure -- and in doing so generate overlapping, often contradictory governance regimes. This "analogy trap" obscures a fundamental transformation: certain advanced AI systems no longer function solely as instruments through which existing institutions exercise power, but as de facto centers of power that shape information, coordinate behavior, and structure social and economic realities at scale. This article offers a new conceptual foundation for AI governance by treating such systems as a fourth societal actor -- what we term the "Digital Gorilla" -- alongside People, the State, and Enterprises. It develops a Four Societal Actors framework that maps how power flows among these actors across five power modalities (economic, epistemic, narrative, authoritative, physical) and uses this map to diagnose where AI capabilities disturb established allocations of authority, concentrate power, or erode accountability. Drawing on constitutional principles of separated powers and federalism, the article advances a federalized, polycentric governance architecture and institutionalizes dynamic checks and balances among the four actors, rather than today's more reactive and compliance-driven approaches. Reframing AI governance in this way shifts the inquiry from how to control a risky technology to how to design institutions capable of accommodating these increasingly powerful and autonomous digital systems without sacrificing democratic legitimacy, the rule of law, or the production of public goods, and it recasts familiar debates in administrative, constitutional, and corporate law as questions of power allocation in a four-actor system.
Related papers
- Algorithmic Governance in the United States: A Multi-Level Case Analysis of AI Deployment Across Federal, State, and Municipal Authorities [0.0]
This study examines how AI is used across federal, state, and municipal levels in the United States. At the federal level, AI is most often institutionalized as a tool for high-stakes control. State governments occupy a more ambiguous middle ground, where AI frequently combines supportive functions with algorithmic gatekeeping.
arXiv Detail & Related papers (2026-02-09T14:36:32Z) - Will Power Return to the Clouds? From Divine Authority to GenAI Authority [0.2864713389096699]
Generative AI systems now mediate newsfeeds, search rankings, and creative content for hundreds of millions of users. This article juxtaposes the Galileo Affair, a touchstone of clerical knowledge control, with contemporary Big-Tech content moderation.
arXiv Detail & Related papers (2025-11-27T18:59:44Z) - Democracy-in-Silico: Institutional Design as Alignment in AI-Governed Polities [2.1485350418225244]
Democracy-in-Silico is an agent-based simulation where societies of advanced AI agents govern themselves under different institutional frameworks. We explore what it means to be human in an age of AI by tasking Large Language Models (LLMs) to embody agents with traumatic memories, hidden agendas, and psychological triggers. We present a novel metric, the Power-Preservation Index (PPI), to quantify misaligned behavior where agents prioritize their own power over public welfare.
arXiv Detail & Related papers (2025-08-27T04:44:41Z) - When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems [78.04679174291329]
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z) - Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt [44.522343543870804]
We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics.
arXiv Detail & Related papers (2025-05-08T12:55:07Z) - Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z) - Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
The article theorizes how AI systems consolidate institutional control across education, warfare, and digital discourse. Case studies are analyzed alongside cultural imaginaries such as Orwell's Nineteen Eighty-Four, Skynet, and Black Mirror, used as tools to surface ethical blind spots.
arXiv Detail & Related papers (2025-04-12T01:01:26Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - AGI, Governments, and Free Societies [0.0]
We argue that AGI poses distinct risks of pushing societies toward either a 'despotic Leviathan' or an 'absent Leviathan'. We analyze how these dynamics could unfold through three key channels. Enhanced state capacity through AGI could enable unprecedented surveillance and control, potentially entrenching authoritarian practices. Conversely, rapid diffusion of AGI capabilities to non-state actors could undermine state legitimacy and governability.
arXiv Detail & Related papers (2025-02-14T03:55:38Z) - Responsible Artificial Intelligence Systems: A Roadmap to Society's Trust through Trustworthy AI, Auditability, Accountability, and Governance [37.10526074040908]
This paper explores the concept of a responsible AI system from a holistic perspective. Its final goal is to propose a roadmap for the design of responsible AI systems.
arXiv Detail & Related papers (2025-02-04T14:47:30Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized [0.0]
This paper presents a conceptual framework to analyze and understand AI-induced field-change.
The introduction of novel AI-agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions.
The institutional infrastructure surrounding AI-induced fields is generally little elaborated, which could be an obstacle to the broader institutionalization of AI systems going forward.
arXiv Detail & Related papers (2021-08-18T14:06:08Z)