Cognitive Castes: Artificial Intelligence, Epistemic Stratification, and the Dissolution of Democratic Discourse
- URL: http://arxiv.org/abs/2507.14218v1
- Date: Wed, 16 Jul 2025 08:46:45 GMT
- Title: Cognitive Castes: Artificial Intelligence, Epistemic Stratification, and the Dissolution of Democratic Discourse
- Authors: Craig S Wright
- Abstract summary: The argument traces how contemporary AI systems amplify the reasoning capacity of individuals equipped with abstraction, symbolic logic, and adversarial interrogation. The proposed response is not technocratic regulation, nor universal access, but the reconstruction of rational autonomy as a civic mandate.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Artificial intelligence functions not as an epistemic leveller, but as an accelerant of cognitive stratification, entrenching and formalising informational castes within liberal-democratic societies. Synthesising formal epistemology, political theory, algorithmic architecture, and economic incentive structures, the argument traces how contemporary AI systems selectively amplify the reasoning capacity of individuals equipped with recursive abstraction, symbolic logic, and adversarial interrogation, whilst simultaneously pacifying the cognitively untrained through engagement-optimised interfaces. Fluency replaces rigour, immediacy displaces reflection, and procedural reasoning is eclipsed by reactive suggestion. The result is a technocratic realignment of power: no longer grounded in material capital alone, but in the capacity to navigate, deconstruct, and manipulate systems of epistemic production. Information ceases to be a commons; it becomes the substrate through which consent is manufactured and autonomy subdued. Deliberative democracy collapses not through censorship, but through the erosion of interpretive agency. The proposed response is not technocratic regulation, nor universal access, but the reconstruction of rational autonomy as a civic mandate, codified in education, protected by epistemic rights, and structurally embedded within open cognitive infrastructure.
Related papers
- Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination. We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z)
- Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse. We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z)
- Modal Logic for Stratified Becoming: Actualization Beyond Possible Worlds [55.2480439325792]
This article develops a novel framework for modal logic based on the idea of stratified actualization. Traditional Kripke semantics treat modal operators as quantification over fully determinate alternatives. We propose a system, Stratified Actualization Logic (SAL), in which modalities are indexed by levels of ontological stability, interpreted as admissibility.
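As a purely illustrative aside (not reproduced from the paper), level-indexed modalities of this kind are often written with an indexed accessibility relation. The clauses below are an assumed sketch of what "modalities indexed by levels of ontological stability" could look like formally, not SAL as the author defines it.

```latex
% Hypothetical sketch of level-indexed modalities; notation assumed, not SAL's own.
% \Diamond_k \varphi : "\varphi is admissible at stability level k".
\[
  \mathcal{M}, w \models \Diamond_k \varphi
  \;\iff\;
  \exists v\, \bigl( w \, R_k \, v \;\wedge\; \mathcal{M}, v \models \varphi \bigr),
  \qquad
  \Box_k \varphi \;:=\; \neg \Diamond_k \neg \varphi .
\]
\[
  % Assumed monotonicity across strata: higher stability admits fewer alternatives,
  % so necessity at a lower stratum carries upward.
  k \le k' \;\Rightarrow\; R_{k'} \subseteq R_k
  \quad\text{and hence}\quad
  \Box_k \varphi \rightarrow \Box_{k'} \varphi .
\]
```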
arXiv Detail & Related papers (2025-06-12T18:35:01Z)
- Rational Superautotrophic Diplomacy (SupraAD); A Conceptual Framework for Alignment Based on Interdisciplinary Findings on the Fundamentals of Cognition [0.0]
Rational Superautotrophic Diplomacy (SupraAD) is a theoretical, interdisciplinary conceptual framework for alignment. It draws on cognitive systems analysis and instrumental rationality modeling. SupraAD reframes alignment as a challenge that predates AI, afflicting all sufficiently complex, coadapting intelligences.
arXiv Detail & Related papers (2025-06-03T17:28:25Z)
- How Malicious AI Swarms Can Threaten Democracy [42.60750455396757]
Malicious AI swarms can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, and voter micro-suppression or mobilization. We urge a multi-pronged response: always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress-tests, transparency audits, and optional client-side "AI shields" for users.
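As a hedged illustration of what the "swarm-detection" idea could amount to in code, the sketch below flags pairs of accounts whose posts are both highly similar and closely synchronised. The `Post` fields, thresholds, and scoring rule are hypothetical choices for exposition, not the paper's method.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from itertools import combinations


@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch


def coordination_score(a: Post, b: Post, window_s: float = 600.0) -> float:
    """Crude pairwise score: textual similarity, zeroed out when posts are far apart in time."""
    similarity = SequenceMatcher(None, a.text, b.text).ratio()
    synchrony = 1.0 if abs(a.timestamp - b.timestamp) <= window_s else 0.0
    return similarity * synchrony


def flag_suspected_swarm(posts: list[Post], threshold: float = 0.8) -> set[str]:
    """Return accounts involved in at least one highly coordinated pair of posts."""
    flagged: set[str] = set()
    for a, b in combinations(posts, 2):
        if a.account != b.account and coordination_score(a, b) >= threshold:
            flagged.update({a.account, b.account})
    return flagged


if __name__ == "__main__":
    sample = [
        Post("acct_1", "Candidate X secretly supports policy Y!", 1000.0),
        Post("acct_2", "Candidate X secretly supports policy Y!!", 1050.0),
        Post("acct_3", "Nice weather in the park today.", 1060.0),
    ]
    print(flag_suspected_swarm(sample))  # expected: acct_1 and acct_2 flagged
```

A production detector would rely on richer signals (embeddings, network structure, posting cadence), but the pairwise similarity-plus-synchrony test conveys the core heuristic.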
arXiv Detail & Related papers (2025-05-18T13:33:37Z)
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- Cognitive Silicon: An Architectural Blueprint for Post-Industrial Computing Systems [0.0]
This paper presents a hypothetical full-stack architectural framework projected toward 2035, exploring a possible trajectory for cognitive computing system design. The proposed architecture would integrate symbolic scaffolding, governed memory, runtime moral coherence, and alignment-aware execution across silicon-to-semantics layers.
arXiv Detail & Related papers (2025-04-23T11:24:30Z)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
The article theorizes how artificial intelligence systems consolidate institutional control across education, military operations, and digital discourse. It analyses how intelligent systems normalize hierarchy under the guise of efficiency and neutrality. Case studies include automated proctoring in education, autonomous targeting in warfare, and algorithmic curation on social platforms.
arXiv Detail & Related papers (2025-04-12T01:01:26Z)
- A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure [0.0]
Epistemic injustice related to AI is a growing concern. In relation to machine learning models, injustice can have a diverse range of sources. I argue that this injustice amounts to the automation of 'epistemicide', the injustice done to agents in their capacity for collective sense-making.
arXiv Detail & Related papers (2025-04-10T07:54:47Z)
- Stochastic, Dynamic, Fluid Autonomy in Agentic AI: Implications for Authorship, Inventorship, and Liability [0.2209921757303168]
Agentic AI systems autonomously pursue goals, adapting strategies through implicit learning. Human and machine contributions become irreducibly entangled in intertwined creative processes. We argue that legal and policy frameworks may need to treat human and machine contributions as functionally equivalent.
arXiv Detail & Related papers (2025-04-05T04:44:59Z)
- Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [132.77459963706437]
This book provides a comprehensive overview, framing intelligent agents within modular, brain-inspired architectures. It explores self-enhancement and adaptive evolution mechanisms, showing how agents autonomously refine their capabilities. It also examines the collective intelligence emerging from agent interactions, cooperation, and societal structures.
arXiv Detail & Related papers (2025-03-31T18:00:29Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
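A minimal sketch of the hybrid pattern described above, under strong assumptions: a symbolic rule layer is consulted first, and a generative component is only called as a fallback. The `generate` stub and the rule format are invented for illustration; they do not reflect ACT-R's actual API or the paper's framework.

```python
from typing import Callable

# Hypothetical stand-in for a call to an external generative model.
def generate(prompt: str) -> str:
    return f"[generated answer for: {prompt}]"

# Symbolic layer: (condition, action) production-style rules, checked before any generative fallback.
Rule = tuple[Callable[[str], bool], Callable[[str], str]]

RULES: list[Rule] = [
    (lambda q: "2 + 2" in q, lambda q: "4"),
    (lambda q: not q.strip().endswith("?"), lambda q: "Please phrase the input as a question."),
]

def hybrid_answer(query: str) -> str:
    """Fire the first matching symbolic rule; defer to the generative component otherwise."""
    for condition, action in RULES:
        if condition(query):
            return action(query)
    return generate(query)

if __name__ == "__main__":
    print(hybrid_answer("What is 2 + 2?"))         # handled symbolically -> "4"
    print(hybrid_answer("Summarise this paper?"))  # falls through to the generative stub
```

The design point is simply that the symbolic layer retains control over when the statistical component is invoked.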
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
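For readers unfamiliar with the notation in the entry above, the lines below show the textbook shape of multi-agent S5 extended with a knowledge-commutativity interaction axiom, alongside the usual fixpoint reading of common knowledge. The paper's exact axiomatisation and its finitary, non-fixpoint characterisation are not reproduced here, so treat this as an assumed illustration.

```latex
% Illustrative form only; the paper's precise axiom and results are not reproduced.
\[
  \textbf{(Comm)}\qquad K_i K_j \varphi \;\rightarrow\; K_j K_i \varphi
  \qquad \text{for all agents } i, j.
\]
\[
  E\varphi \;:=\; \bigwedge_{i=1}^{n} K_i \varphi,
  \qquad
  C\varphi \;\leftrightarrow\; E(\varphi \wedge C\varphi)
  \quad \text{(the standard fixpoint reading, which the paper replaces with a finitary characterisation).}
\]
```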
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.