Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse
- URL: http://arxiv.org/abs/2504.09030v3
- Date: Sat, 07 Jun 2025 21:39:07 GMT
- Title: Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse
- Authors: Hasan Oguz
- Abstract summary: The article theorizes how artificial intelligence systems consolidate institutional control across education, military operations, and digital discourse. It analyzes how intelligent systems normalize hierarchy under the guise of efficiency and neutrality. Case studies include automated proctoring in education, autonomous targeting in warfare, and algorithmic curation on social platforms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article develops the concept of authoritarian recursion to theorize how artificial intelligence (AI) systems consolidate institutional control across education, military operations, and digital discourse. Rather than treating these domains in isolation, it identifies a shared recursive architecture in which algorithmic systems mediate judgment, obscure accountability, and reshape the conditions of moral and epistemic agency. Grounded in critical discourse analysis and sociotechnical ethics, the paper synthesizes historical precedent, cultural narrative, and contemporary deployment to examine how intelligent systems normalize hierarchy under the guise of efficiency and neutrality. Case studies include automated proctoring in education, autonomous targeting in warfare, and algorithmic curation on social platforms. Cultural imaginaries such as Orwell's *Nineteen Eighty-Four*, *The Terminator*'s Skynet, and *Black Mirror* are treated as heuristic devices that illuminate public anxieties and design assumptions embedded in technological systems. The analysis integrates frameworks from the Fairness, Accountability, and Transparency (FAccT) paradigm, relational ethics, and data justice theory to explore the normative implications of predictive infrastructures. It argues that recursive control operates through moral outsourcing, behavioral normalization, and epistemic closure. By reframing AI not as a neutral tool but as a communicative and institutional infrastructure, the article highlights the need for ethical orientations that prioritize democratic refusal, epistemic plurality, and responsible design in the governance of intelligent systems.
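To make the abstract's mechanism of behavioral normalization and epistemic closure concrete, here is a minimal toy simulation, added for this summary and not code from the paper (topic names and the boost factor are hypothetical): an engagement-optimizing curator boosts whatever already earned engagement, so exposure recursively narrows.

```python
import random
from collections import Counter

# Toy model of algorithmic curation as a recursive feedback loop.
# A hypothetical illustration only, not the paper's methodology.
TOPICS = ["politics", "science", "sports", "art", "music"]

def simulate(rounds: int = 200, boost: float = 1.5, seed: int = 0) -> Counter:
    """Serve topics in proportion to past engagement; engagement boosts weight."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}   # curator's serving weights
    shown = Counter()
    for _ in range(rounds):
        pick = rng.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
        shown[pick] += 1                 # assume the user engages with what is shown
        weights[pick] *= boost           # engagement recursively boosts exposure
    return shown

print(simulate())
```

Run repeatedly, this rich-get-richer update concentrates nearly all impressions on one topic: a miniature of the feedback dynamic the article calls recursive control.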
Related papers
- Epistemic Scarcity: The Economics of Unresolvable Unknowns
We argue that AI systems are incapable of performing the core functions of economic coordination. We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z)
- Ethical AI: Towards Defining a Collective Evaluation Framework
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. This article proposes a modular ethical assessment framework built on ontological blocks of meaning: discrete, interpretable units.
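As a rough sketch of what such discrete, interpretable blocks might look like in practice (a hypothetical reading, not the paper's actual framework; all names and scores below are illustrative), each block can score one concern and carry its own rationale:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of a modular ethical assessment: each "block" is a
# discrete, interpretable unit. Illustrative only.
@dataclass
class AssessmentBlock:
    name: str
    check: Callable[[Dict], float]   # returns a score in [0, 1]
    rationale: str

def assess(system_profile: Dict, blocks: List[AssessmentBlock]) -> Dict[str, float]:
    """Run every block independently so each verdict stays interpretable."""
    return {b.name: b.check(system_profile) for b in blocks}

blocks = [
    AssessmentBlock("data_ownership", lambda p: 1.0 if p.get("consent_recorded") else 0.0,
                    "Was consent for data use recorded?"),
    AssessmentBlock("privacy", lambda p: 1.0 if p.get("pii_minimized") else 0.2,
                    "Is personally identifying data minimized?"),
]

print(assess({"consent_recorded": True, "pii_minimized": False}, blocks))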
arXiv Detail & Related papers (2025-05-30T21:10:47Z)
- The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking
As AI technology advances, individuals will increasingly rely on AI agents to navigate life's growing complexities.
This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions, or having autonomy compromised.
arXiv Detail & Related papers (2025-04-24T19:34:43Z)
- Naming is framing: How cybersecurity's language problems are repeating in AI governance
This paper argues that misnomers like cybersecurity and artificial intelligence (AI) are more than semantic quirks. These misnomers carry significant governance risks by obscuring human agency, inflating expectations, and distorting accountability. The paper advocates for a language-first approach to AI governance: one that interrogates dominant metaphors, foregrounds human roles, and co-develops a lexicon that is precise, inclusive, and reflexive.
arXiv Detail & Related papers (2025-04-16T20:58:26Z)
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance
We develop a comprehensive framework designed to regulate AI technologies deployed in high-stakes domains such as defense, finance, healthcare, and education. Our approach combines rigorous technical analysis, quantitative risk assessment, and normative evaluation to expose systemic vulnerabilities.
arXiv Detail & Related papers (2025-03-09T03:11:32Z)
- Alignment, Agency and Autonomy in Frontier AI: A Systems Engineering Perspective
Concepts of alignment, agency, and autonomy have become central to AI safety, governance, and control. This paper traces the historical, philosophical, and technical evolution of these concepts, emphasizing how their definitions influence AI development, deployment, and oversight.
arXiv Detail & Related papers (2025-02-20T21:37:20Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
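A minimal sketch of one metacognitive capability in this spirit, uncertainty-aware deferral, might look like the following (an illustrative assumption, not the paper's proposal; the threshold and names are hypothetical): the system estimates the entropy of its own predictions and hands off to a human when it is too uncertain.

```python
import math
from typing import List

# Toy metacognition: the system monitors its own uncertainty.
# Illustrative sketch only; threshold is an arbitrary assumption.
def entropy(probs: List[float]) -> float:
    """Shannon entropy of a predicted class distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def decide(probs: List[float], max_entropy_bits: float = 0.8) -> str:
    if entropy(probs) > max_entropy_bits:
        return "defer-to-human"   # the model 'knows that it doesn't know'
    best = max(range(len(probs)), key=probs.__getitem__)
    return f"act-on-class-{best}"

print(decide([0.95, 0.03, 0.02]))  # confident -> acts
print(decide([0.40, 0.35, 0.25]))  # uncertain -> defers
```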
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
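The robustness claim can be illustrated with a toy probe in the same spirit (a stand-in sketch, not the paper's adversarial-explanation method; the policy and noise model are placeholders): perturb the observation and measure how often the policy's action flips.

```python
import random
from typing import Callable, List

# Toy robustness probe: does bounded noise on the observation flip the action?
# Hypothetical stand-ins only; not the AE methodology from the paper.
Observation = List[float]

def greedy_policy(obs: Observation) -> int:
    """Stand-in policy: pick the index of the largest observation feature."""
    return max(range(len(obs)), key=obs.__getitem__)

def action_flip_rate(policy: Callable[[Observation], int],
                     obs: Observation, eps: float = 0.1,
                     trials: int = 1000, seed: int = 0) -> float:
    rng = random.Random(seed)
    base = policy(obs)
    flips = 0
    for _ in range(trials):
        noisy = [x + rng.uniform(-eps, eps) for x in obs]
        flips += policy(noisy) != base
    return flips / trials

# A decision near the boundary is fragile; a clear-cut one is robust.
print(action_flip_rate(greedy_policy, [0.51, 0.49]))  # high flip rate
print(action_flip_rate(greedy_policy, [0.90, 0.10]))  # ~0.0
```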
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Reconfiguring Participatory Design to Resist AI Realism
This paper argues that participatory design can play a role in questioning and resisting AI Realism.
I examine three concerning aspects of AI Realism: the facade of democratization that lacks true empowerment, demands for human adaptability, and the obfuscation of essential human labor enabling the AI system.
I propose resisting AI Realism by reconfiguring PD to continue engaging with value-centered visions, increasing its exploration of non-AI alternatives, and making the essential human labor underpinning AI systems visible.
arXiv Detail & Related papers (2024-06-05T13:21:46Z)
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models
Deepfakes and the spread of mis- and disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide.
We highlight the mechanisms through which generative AI based on large models (LM-based GenAI) crafts seemingly convincing yet fabricated content.
We introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives.
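A minimal sketch of what combining a detection algorithm with another signal could look like (hypothetical placeholders, not the framework's actual components or weights): fuse a classifier's fabrication score with a provenance check.

```python
from dataclasses import dataclass

# Hedged sketch of signal fusion for deepfake triage.
# Both components are hypothetical placeholders, not the paper's algorithms.
@dataclass
class MediaItem:
    detector_score: float       # P(fabricated) from some trained classifier
    has_valid_provenance: bool  # e.g., verified capture metadata or signature

def classify(item: MediaItem, threshold: float = 0.7) -> str:
    # Trustworthy provenance lowers suspicion; otherwise rely on the detector.
    score = item.detector_score * (0.5 if item.has_valid_provenance else 1.0)
    return "likely-fabricated" if score >= threshold else "likely-authentic"

print(classify(MediaItem(0.9, False)))  # likely-fabricated
print(classify(MediaItem(0.9, True)))   # provenance tips it to likely-authentic
```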
arXiv Detail & Related papers (2023-11-29T06:47:58Z)
- AI Alignment: A Comprehensive Survey
AI alignment aims to make AI systems behave in line with human intentions and values. We identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality. We decompose current alignment research into two key components: forward alignment and backward alignment.
arXiv Detail & Related papers (2023-10-30T15:52:15Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Descriptive AI Ethics: Collecting and Understanding the Public Opinion
This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other.
We discuss its implications on bridging the gap between optimistic and pessimistic views towards AI systems' deployment.
arXiv Detail & Related papers (2021-01-15T03:46:27Z)