Dial E for Ethical Enforcement: institutional VETO power as a governance primitive
- URL: http://arxiv.org/abs/2603.00617v1
- Date: Sat, 28 Feb 2026 12:18:55 GMT
- Title: Dial E for Ethical Enforcement: institutional VETO power as a governance primitive
- Authors: Subramanyam Sahoo, Vinija Jain, Aman Chadha, Divya Chaudhary
- Abstract summary: The paper argues that communities most vulnerable to military uses must lead governance design, and that institutional veto power is a prerequisite for converting symbolic safeguards into enforceable responsibility.
- Score: 16.505918019260964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The persistent militarization of large reasoning models stems not from technical necessity but from governance arrangements that strip researchers of meaningful authority to refuse harmful transfers and deployments. Existing accountability mechanisms such as model cards and responsible AI statements operate as reputational signals detached from decision-making architecture. We identify institutional veto power as a missing governance primitive: a formal authority to halt subsequent use or distribution of research when credible risks of weaponization emerge. Drawing on precedents in nuclear nonproliferation and biomedical ethics, the paper maps unprotected veto points across the research lifecycle, diagnoses why compliance without enforceable constraints fails, and offers concrete institutional designs that embed veto authority while reducing the risk of political capture. The paper argues that communities most vulnerable to military uses must lead governance design, and that institutional veto power is a prerequisite for converting symbolic safeguards into enforceable responsibility and for achieving meaningful model disarmament.
Related papers
- Administrative Law's Fourth Settlement: AI and the Capability-Accountability Trap [0.0]
  Since 1887, administrative law has navigated a "capability-accountability trap". This Article proposes three doctrinal innovations within administrative law to realize this potential.
  arXiv Detail & Related papers (2026-02-10T11:36:01Z)
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
  Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
  arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- Democracy-in-Silico: Institutional Design as Alignment in AI-Governed Polities [2.1485350418225244]
  Democracy-in-Silico is an agent-based simulation in which societies of advanced AI agents govern themselves under different institutional frameworks. We explore what it means to be human in an age of AI by tasking Large Language Models (LLMs) to embody agents with traumatic memories, hidden agendas, and psychological triggers. We present a novel metric, the Power-Preservation Index (PPI), to quantify misaligned behavior in which agents prioritize their own power over public welfare.
  arXiv Detail & Related papers (2025-08-27T04:44:41Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
  We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
  arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- A Moral Agency Framework for Legitimate Integration of AI in Bureaucracies [0.0]
  Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI). We present a three-point Moral Agency Framework for the legitimate integration of AI in bureaucratic structures.
  arXiv Detail & Related papers (2025-08-11T17:49:19Z)
- Toward a Global Regime for Compute Governance: Building the Pause Button [0.4952055253916912]
  We propose a governance system designed to prevent AI systems from being trained by restricting access to computational resources. We identify three key intervention points -- technical, traceability, and regulatory -- and organize them within a Governance-Enforcement-Verification framework. Technical mechanisms include tamper-proof FLOP caps, model locking, and offline licensing.
  arXiv Detail & Related papers (2025-06-25T15:18:19Z)
- Toward a Theory of Agents as Tool-Use Decision-Makers [89.26889709510242]
  We argue that true autonomy requires agents to be grounded in a coherent epistemic framework that governs what they know, what they need to know, and how to acquire that knowledge efficiently. We propose a unified theory that treats internal reasoning and external actions as equivalent epistemic tools, enabling agents to systematically coordinate introspection and interaction. This perspective shifts the design of agents from mere action executors to knowledge-driven intelligence systems, offering a principled path toward building foundation agents capable of adaptive, efficient, and goal-directed behavior.
  arXiv Detail & Related papers (2025-06-01T07:52:16Z)
- Watermarking Without Standards Is Not AI Governance [46.71493672772134]
  We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
  arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
  The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
  arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- Reinsuring AI: Energy, Agriculture, Finance & Medicine as Precedents for Scalable Governance of Frontier Artificial Intelligence [0.0]
  This paper proposes a novel framework for governing such high-stakes models through a three-tiered insurance architecture. It shows how the federal government can stabilize private AI insurance markets without resorting to brittle regulation or predictive licensing regimes.
  arXiv Detail & Related papers (2025-04-02T21:02:19Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
  This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
  arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- AGI, Governments, and Free Societies [0.0]
  We argue that AGI poses distinct risks of pushing societies toward either a "despotic Leviathan" or an "absent Leviathan". We analyze how these dynamics could unfold through three key channels. Enhanced state capacity through AGI could enable unprecedented surveillance and control, potentially entrenching authoritarian practices. Conversely, rapid diffusion of AGI capabilities to non-state actors could undermine state legitimacy and governability.
  arXiv Detail & Related papers (2025-02-14T03:55:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.