Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse
- URL: http://arxiv.org/abs/2504.09030v1
- Date: Sat, 12 Apr 2025 01:01:26 GMT
- Title: Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse
- Authors: Hasan Oguz
- Abstract summary: This study employs a mixed-methods approach to examine how AI technologies may reproduce structures of authoritarian control. The study identifies recurring patterns of harm, including unchecked autonomy, algorithmic opacity, surveillance normalization, and the amplification of structural bias. The findings call for a holistic ethical framework that integrates lessons from history, critical social theory, and technical design.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing integration of artificial intelligence (AI) into military, educational, and propaganda systems raises urgent ethical challenges related to autonomy, bias, and the erosion of human oversight. This study employs a mixed-methods approach -- combining historical analysis, speculative fiction critique, and contemporary case studies -- to examine how AI technologies may reproduce structures of authoritarian control. Drawing parallels between Nazi-era indoctrination systems, the fictional Skynet AI from *The Terminator*, and present-day deployments of AI in classrooms, battlefields, and digital media, the study identifies recurring patterns of harm. These include unchecked autonomy, algorithmic opacity, surveillance normalization, and the amplification of structural bias. In military contexts, lethal autonomous weapons systems (LAWS) undermine accountability and challenge compliance with international humanitarian law. In education, AI-driven learning platforms and surveillance technologies risk reinforcing ideological conformity and suppressing intellectual agency. Meanwhile, AI-powered propaganda systems increasingly manipulate public discourse through targeted content curation and disinformation. The findings call for a holistic ethical framework that integrates lessons from history, critical social theory, and technical design. To mitigate recursive authoritarian risks, the study advocates for robust human-in-the-loop architectures, algorithmic transparency, participatory governance, and the integration of critical AI literacy into policy and pedagogy.
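To make the advocated "human-in-the-loop architecture" concrete, the sketch below shows one minimal form such a control could take: an approval gate in which any automated decision above a risk threshold is suspended until a human reviewer signs off, with a rationale exposed for transparency. All names (`Decision`, `gated_execute`, `human_review`) and the threshold value are hypothetical illustrations under assumptions of our own; the paper itself does not prescribe an implementation.

```python
# A minimal, illustrative sketch of a human-in-the-loop approval gate.
# All names and the threshold are hypothetical; the paper does not
# prescribe an implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # what the automated system proposes to do
    risk_score: float  # model-estimated risk in [0, 1]
    rationale: str     # human-readable explanation (transparency)

def gated_execute(decision: Decision,
                  human_review: Callable[[Decision], bool],
                  risk_threshold: float = 0.3) -> bool:
    """Return True if the proposed action may proceed.

    Low-risk decisions proceed automatically; anything at or above
    the threshold is never executed without explicit human approval.
    """
    if decision.risk_score < risk_threshold:
        return True
    return human_review(decision)

# Usage: a reviewer callback that inspects the rationale and declines.
proposal = Decision(action="flag student essay as plagiarized",
                    risk_score=0.8,
                    rationale="0.91 similarity with another submission")
approved = gated_execute(proposal, lambda d: (print(d.rationale), False)[1])
print("proceed" if approved else "escalated to a human and declined")
```

The design point the sketch tries to capture is that the human is structurally required, not advisory: a high-risk action simply cannot execute without the reviewer's explicit return value.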
Related papers
- The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking
As AI technology advances, individuals will increasingly rely on AI agents to navigate life's growing complexities.
This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions or having one's autonomy compromised.
arXiv Detail & Related papers (2025-04-24T19:34:43Z)
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance
We develop a comprehensive framework designed to regulate AI technologies deployed in high-stakes domains such as defense, finance, healthcare, and education.
Our approach combines rigorous technical analysis, quantitative risk assessment, and normative evaluation to expose systemic vulnerabilities.
arXiv Detail & Related papers (2025-03-09T03:11:32Z)
- Alignment, Agency and Autonomy in Frontier AI: A Systems Engineering Perspective
Concepts of alignment, agency, and autonomy have become central to AI safety, governance, and control.
This paper traces the historical, philosophical, and technical evolution of these concepts, emphasizing how their definitions influence AI development, deployment, and oversight.
arXiv Detail & Related papers (2025-02-20T21:37:20Z)
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Reconfiguring Participatory Design to Resist AI Realism
This paper argues that participatory design can play a role in questioning and resisting AI Realism.
I examine three concerning aspects of AI Realism: the facade of democratization that lacks true empowerment, demands for human adaptability, and the obfuscation of the essential human labor enabling the AI system.
I propose resisting AI Realism by reconfiguring PD to continue engaging with value-centered visions, increasing its exploration of non-AI alternatives, and making the essential human labor underpinning AI systems visible.
arXiv Detail & Related papers (2024-06-05T13:21:46Z)
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models
Deepfakes and the spread of mis- and disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide.
We highlight the mechanisms through which generative AI based on large models (LM-based GenAI) crafts seemingly convincing yet fabricated content.
We introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives.
arXiv Detail & Related papers (2023-11-29T06:47:58Z)
- AI Alignment: A Comprehensive Survey
AI alignment aims to make AI systems behave in line with human intentions and values.
We identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality.
We decompose current alignment research into two key components: forward alignment and backward alignment.
arXiv Detail & Related papers (2023-10-30T15:52:15Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating the adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider when qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)