Administrative Law's Fourth Settlement: AI and the Capability-Accountability Trap
- URL: http://arxiv.org/abs/2602.09678v1
- Date: Tue, 10 Feb 2026 11:36:01 GMT
- Title: Administrative Law's Fourth Settlement: AI and the Capability-Accountability Trap
- Authors: Nicholas Caputo
- Abstract summary: Since 1887, administrative law has navigated a "capability-accountability trap." This Article proposes three doctrinal innovations within administrative law to realize this potential.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since 1887, administrative law has navigated a "capability-accountability trap": technological change forces government to become more sophisticated, but sophistication renders agencies opaque to generalist overseers like the courts and Congress. The law's response--substituting procedural review for substantive oversight--has produced a sedimentary accretion of requirements that ossify capacity without ensuring democratic control. This Article argues that the Supreme Court's post-Loper Bright retrenchment is best understood as an effort to shrink administration back to comprehensible size in response to this complexification. But reducing complexity in this way sacrifices capability precisely when climate change, pandemics, and AI risks demand more sophisticated governance. AI offers a different path. Unlike many prior administrative technologies that increased opacity alongside capacity, AI can help build "scrutability" in government, translating technical complexity into accessible terms, surfacing the assumptions that matter for oversight, and enabling substantive verification of agency reasoning. This Article proposes three doctrinal innovations within administrative law to realize this potential: a Model and System Dossier (documenting model purpose, evaluation, monitoring, and versioning) extending the administrative record to AI decision-making; a material-model-change trigger specifying when AI updates require new process; and a "deference to audit" standard that rewards agencies for auditable evaluation of their AI tools. The result is a framework for what this Article calls the "Fourth Settlement," administrative law that escapes the capability-accountability trap by preserving capability while restoring comprehensible oversight of administration.
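The proposed Model and System Dossier can be read as a structured record attached to the administrative record. The sketch below is a minimal illustration only: the `ModelSystemDossier` class, its field names, and the `material` flag (standing in for the abstract's material-model-change trigger) are hypothetical choices, not a schema from the Article.

```python
from dataclasses import dataclass, field

@dataclass
class ModelSystemDossier:
    """Hypothetical record documenting model purpose, evaluation,
    monitoring, and versioning, as the abstract proposes."""
    model_purpose: str
    model_version: str
    evaluation_results: dict = field(default_factory=dict)
    monitoring_plan: str = ""
    version_history: list = field(default_factory=list)

    def record_update(self, new_version: str, material: bool) -> None:
        """Log a model update; under the Article's proposal, a material
        change would trigger new administrative process."""
        self.version_history.append(
            {"version": new_version, "material": material}
        )
        self.model_version = new_version

# Example: an agency documents a deployed model and a material update.
dossier = ModelSystemDossier(
    model_purpose="benefits-eligibility triage (illustrative)",
    model_version="1.0",
)
dossier.record_update("1.1", material=True)
```

The point of the sketch is that each field maps to an oversight question a reviewing court could ask, which is what would make a "deference to audit" standard administrable.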
Related papers
- The Chancellor Trap: Administrative Mediation and the Hollowing of Sovereignty in the Algorithmic Age [0.0]
In high-throughput organizations, AI-mediated decision support can reduce the probability that failures become publicly legible and politically contestable. The article formalizes this dynamic as a principal-agent problem characterized by a verification gap. The results are consistent with a paradox of competence: governance systems may become more effective at absorbing and resolving failures internally while simultaneously raising the threshold at which those failures become politically visible.
arXiv Detail & Related papers (2026-02-09T07:28:44Z) - Position: Human-Centric AI Requires a Minimum Viable Level of Human Understanding [26.14684888478043]
This paper argues that prevailing approaches to transparency, user control, literacy, and governance do not define the foundational understanding humans must retain for oversight under sustained AI delegation. To formalize this, we define the Cognitive Integrity Threshold (CIT) as the minimum comprehension required to preserve oversight, autonomy, and accountable participation under AI assistance.
arXiv Detail & Related papers (2026-01-31T18:37:33Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Limits of Safe AI Deployment: Differentiating Oversight and Control [0.0]
"Human oversight" requirements risk codifying vague or inconsistent interpretations of key concepts like oversight and control. This paper undertakes a targeted critical review of literature on supervision outside of AI. Control aims to prevent failures, while oversight focuses on detection, remediation, or incentives for future prevention.
arXiv Detail & Related papers (2025-07-04T12:22:35Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z) - Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z) - AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes. The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels. It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Open Problems in Technical AI Governance [102.19067750759471]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.<n>This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.