Will Power Return to the Clouds? From Divine Authority to GenAI Authority
- URL: http://arxiv.org/abs/2512.03076v1
- Date: Thu, 27 Nov 2025 18:59:44 GMT
- Title: Will Power Return to the Clouds? From Divine Authority to GenAI Authority
- Authors: Mohammad Saleh Torkestani, Taha Mansouri
- Abstract summary: Generative AI systems now mediate newsfeeds, search rankings, and creative content for hundreds of millions of users. This article juxtaposes the Galileo Affair, a touchstone of clerical knowledge control, with contemporary Big-Tech content moderation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI systems now mediate newsfeeds, search rankings, and creative content for hundreds of millions of users, positioning a handful of private firms as de-facto arbiters of truth. Drawing on a comparative-historical lens, this article juxtaposes the Galileo Affair, a touchstone of clerical knowledge control, with contemporary Big-Tech content moderation. We integrate Foucault's power/knowledge thesis, Weber's authority types (extended to a rational-technical and emerging agentic-technical modality), and Floridi's Dataism to analyze five recurrent dimensions: disciplinary power, authority modality, data pluralism, trust versus reliance, and resistance pathways. Primary sources (Inquisition records; platform transparency reports) and recent empirical studies on AI trust provide the evidentiary base. Findings show strong structural convergences: highly centralized gatekeeping, legitimacy claims couched in transcendent principles, and systematic exclusion of marginal voices. Divergences lie in temporal velocity, global scale, and the widening gap between public reliance and trust in AI systems. Ethical challenges cluster around algorithmic opacity, linguistic inequity, bias feedback loops, and synthetic misinformation. We propose a four-pillar governance blueprint: (1) a mandatory international model-registry with versioned policy logs, (2) representation quotas and regional observatories to de-center English-language hegemony, (3) mass critical-AI literacy initiatives, and (4) public-private support for community-led data trusts. Taken together, these measures aim to narrow the trust-reliance gap and prevent GenAI from hardcoding a twenty-first-century digital orthodoxy.
Related papers
- The Digital Gorilla: Rebalancing Power in the Age of AI [0.0]
Article offers a conceptual foundation for AI governance by treating such systems as a fourth societal actor. It develops a Four Societal Actors framework that maps how power flows among these actors across five power modalities. It advances a federalized, polycentric governance architecture and institutionalizes dynamic checks and balances.
arXiv Detail & Related papers (2026-02-23T17:46:54Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Generative AI for Autonomous Driving: Frontiers and Opportunities [145.6465312554513]
This survey delivers a comprehensive synthesis of the emerging role of GenAI across the autonomous driving stack. We begin by distilling the principles and trade-offs of modern generative modeling, encompassing VAEs, GANs, Diffusion Models, and Large Language Models. We categorize practical applications, such as synthetic data generation, end-to-end driving strategies, high-fidelity digital twin systems, smart transportation networks, and cross-domain transfer to embodied AI.
arXiv Detail & Related papers (2025-05-13T17:59:20Z)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
Article theorizes how AI systems consolidate institutional control across education, warfare, and digital discourse. Case studies are analyzed alongside cultural imaginaries such as Orwell's Nineteen Eighty-Four, Skynet, and Black Mirror, used as tools to surface ethical blind spots.
arXiv Detail & Related papers (2025-04-12T01:01:26Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Generative AI as Digital Media [0.0]
Generative AI is frequently portrayed as revolutionary or even apocalyptic. This essay argues that such views are misguided. Instead, generative AI should be understood as an evolutionary step in the broader algorithmic media landscape.
arXiv Detail & Related papers (2025-03-09T08:58:17Z)
- Measuring Political Preferences in AI Systems: An Integrative Approach [0.0]
This study employs a multi-method approach to assess political bias in leading AI systems. Results indicate a consistent left-leaning bias across most contemporary AI systems. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies.
arXiv Detail & Related papers (2025-03-04T01:40:28Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neural learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.