Policy myopia as a mechanism of gradual disempowerment in Post-AGI governance, Circa 2049
- URL: http://arxiv.org/abs/2603.03267v1
- Date: Tue, 03 Mar 2026 18:54:57 GMT
- Title: Policy myopia as a mechanism of gradual disempowerment in Post-AGI governance, Circa 2049
- Authors: Subramanyam Sahoo
- Abstract summary: Post-AGI information systems will transform how institutions make decisions in ways that remove humans from meaningful participation in resource allocation. We show that policy myopia is not a symptom of poor attention management but a mechanism producing irreversible human disempowerment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Post-AGI information systems won't merely distract governance from important problems. They will systematically transform how institutions make decisions in ways that progressively remove humans from meaningful participation in resource allocation. We show that policy myopia -- the tendency to prioritize visible crises over invisible structural risks -- is not a symptom of poor attention management but a mechanism producing irreversible human disempowerment. Through three entangled mechanisms (salience capture displaces consequentialist reasoning, capacity cascade makes recovery structurally infeasible, value lock-in crystallizes outdated preferences), policy myopia couples with institutional dynamics to create a self-reinforcing equilibrium where human disempowerment becomes the rational outcome of institutional optimization. We formalize these mechanisms through coupled dynamical systems modeling and demonstrate through numerical simulation that these mechanisms operate simultaneously across economic, political, and cultural systems, amplifying each other through feedback loops.
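The abstract's claim about coupled dynamical systems can be illustrated with a minimal numerical sketch. The state variables, functional forms, and parameter values below are hypothetical stand-ins chosen for demonstration, not taken from the paper: salience capture `s` grows logistically and is dampened by institutional capacity `c`, capacity erodes as salience capture rises, and value lock-in `v` accumulates with salience capture. The coupled system is integrated with forward Euler.

```python
# Illustrative (hypothetical) coupled dynamics for the three mechanisms.
# s = salience capture, c = institutional capacity, v = value lock-in.
# All coefficients are invented for demonstration purposes only.

def step(s, c, v, dt=0.01):
    ds = 0.5 * s * (1 - s) - 0.2 * c * s  # salience capture grows; capacity dampens it
    dc = -0.4 * s * c                     # capacity erodes as salience capture rises
    dv = 0.3 * s * (1 - v)                # lock-in accumulates with salience capture
    return s + ds * dt, c + dc * dt, v + dv * dt

def simulate(steps=5000, s0=0.1, c0=1.0, v0=0.0):
    """Integrate the toy system with forward Euler and return the final state."""
    s, c, v = s0, c0, v0
    for _ in range(steps):
        s, c, v = step(s, c, v)
    return s, c, v

s, c, v = simulate()
print(f"salience={s:.3f} capacity={c:.3f} lock-in={v:.3f}")
```

Under these assumed parameters the system exhibits the self-reinforcing equilibrium the abstract describes: salience capture saturates, capacity collapses toward zero, and lock-in becomes near-total, with each feedback amplifying the others.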
Related papers
- The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies [57.387081435669835]
Multi-agent systems built from large language models offer a promising paradigm for scalable collective intelligence and self-evolution. We show that an agent society satisfying continuous self-evolution, complete isolation, and safety invariance is impossible. We propose several solution directions to alleviate the identified safety concern.
arXiv Detail & Related papers (2026-02-10T15:18:19Z)
- From Linear Risk to Emergent Harm: Complexity as the Missing Core of AI Governance [0.0]
Risk-based AI regulation promises proportional controls aligned with anticipated harms. This paper argues that such frameworks often fail for structural reasons. We propose a complexity-based framework for AI governance that treats regulation as intervention rather than control.
arXiv Detail & Related papers (2025-12-14T14:19:21Z)
- When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems [78.04679174291329]
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z)
- Situationally-Aware Dynamics Learning [57.698553219660376]
We propose a novel framework for online learning of hidden state representations. Our approach explicitly models the influence of unobserved parameters on both transition dynamics and reward structures. Experiments in both simulation and the real world reveal significant improvements in data efficiency, policy performance, and the emergence of safer, adaptive navigation strategies.
arXiv Detail & Related papers (2025-05-26T06:40:11Z)
- AGI, Governments, and Free Societies [0.0]
We argue that AGI poses distinct risks of pushing societies toward either a 'despotic Leviathan' or an 'absent Leviathan'. We analyze how these dynamics could unfold through three key channels. Enhanced state capacity through AGI could enable unprecedented surveillance and control, potentially entrenching authoritarian practices. Conversely, rapid diffusion of AGI capabilities to non-state actors could undermine state legitimacy and governability.
arXiv Detail & Related papers (2025-02-14T03:55:38Z)
- Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development [15.701299669203618]
We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity.
arXiv Detail & Related papers (2025-01-28T13:45:41Z)
- Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents and find that their social interactions result in human-like polarization. Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence [0.0]
This paper advances the concept of structural risk by introducing a framework grounded in complex systems research. We classify structural risks into three categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. To anticipate and govern these dynamics, the paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight.
arXiv Detail & Related papers (2024-06-21T05:44:50Z)
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- Multiscale Governance [0.0]
Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
arXiv Detail & Related papers (2021-04-06T19:23:44Z)