How should AI knowledge be governed? Epistemic authority, structural transparency, and the case for open cognitive graphs
- URL: http://arxiv.org/abs/2602.16949v2
- Date: Fri, 20 Feb 2026 21:46:11 GMT
- Title: How should AI knowledge be governed? Epistemic authority, structural transparency, and the case for open cognitive graphs
- Authors: Chao Li, Chunyi Zhao, Yuru Wang, Yi Hu
- Abstract summary: This paper reconceptualizes educational AI as public educational cognitive infrastructure. By shifting attention from access to governance conditions, the proposed framework offers a structural approach to aligning educational AI with democratic accountability and public responsibility.
- Score: 10.759885069667066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Through widespread use in formative assessment and self-directed learning, educational AI systems exercise de facto epistemic authority. Unlike human educators, however, these systems are not embedded in institutional mechanisms of accountability, review, and correction, creating a structural governance challenge that cannot be resolved through application-level regulation or model transparency alone. This paper reconceptualizes educational AI as public educational cognitive infrastructure and argues that its governance must address the epistemic authority such systems exert. We propose the Open Cognitive Graph (OCG) as a technical interface that externalizes pedagogical structure in forms aligned with human educational reasoning. By explicitly representing concepts, prerequisite relations, misconceptions, and scaffolding, OCGs make the cognitive logic governing AI behaviour inspectable and revisable. Building on this foundation, we introduce the trunk-branch governance model, which organizes epistemic authority across layers of consensus and pluralism. A case study of a community-governed educational foundation model demonstrates how distributed expertise can be integrated through institutionalized processes of validation, correction, and propagation. The paper concludes by discussing implications for educational equity, AI policy, and sustainability. By shifting attention from access to governance conditions, the proposed framework offers a structural approach to aligning educational AI with democratic accountability and public responsibility.
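The abstract describes OCGs as explicitly representing concepts, prerequisite relations, misconceptions, and scaffolding, but does not fix a concrete data format. A minimal illustrative sketch, with hypothetical class and field names not taken from the paper, might externalize that pedagogical structure as an inspectable graph:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """One node of a hypothetical Open Cognitive Graph."""
    name: str
    prerequisites: list = field(default_factory=list)   # concept names that must be learned first
    misconceptions: list = field(default_factory=list)  # common errors reviewers can inspect and revise
    scaffolding: list = field(default_factory=list)     # supports offered while the concept is taught

class OpenCognitiveGraph:
    """Holds concepts and derives a teaching order from prerequisite links."""
    def __init__(self):
        self.concepts = {}

    def add(self, concept: Concept) -> None:
        self.concepts[concept.name] = concept

    def learning_order(self) -> list:
        """Depth-first topological sort: prerequisites always precede dependents."""
        order, seen = [], set()
        def visit(name):
            if name in seen:
                return
            seen.add(name)
            for pre in self.concepts[name].prerequisites:
                visit(pre)
            order.append(name)
        for name in self.concepts:
            visit(name)
        return order

g = OpenCognitiveGraph()
g.add(Concept("fractions"))
g.add(Concept("ratios",
              prerequisites=["fractions"],
              misconceptions=["treating a ratio as a subtraction"],
              scaffolding=["part-whole diagrams"]))
g.add(Concept("proportional reasoning", prerequisites=["ratios"]))
print(g.learning_order())
```

Because the prerequisite relations and misconception annotations are plain data rather than model weights, a governance body could review, correct, and re-propagate them, which is the kind of inspectability and revisability the abstract attributes to OCGs.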
Related papers
- The Social Responsibility Stack: A Control-Theoretic Architecture for Governing Socio-Technical AI [0.0]
This paper introduces the Social Responsibility Stack (SRS), a framework that embeds societal values into AI systems. We show how SRS translates normative objectives into actionable engineering and operational controls. The framework bridges ethics, control theory, and AI governance, providing a practical foundation for accountable, adaptive, and auditable socio-technical AI systems.
arXiv Detail & Related papers (2025-12-18T18:42:16Z)
- From Educational Analytics to AI Governance: Transferable Lessons from Complex Systems Interventions [0.0]
We argue that five core principles developed within CAPIRE transfer directly to the challenge of governing AI systems. The isomorphism is not merely analogical: both domains exhibit non-linearity, emergence, feedback loops, strategic adaptation, and path dependence. We propose Complex Systems AI Governance (CSAIG) as an integrated framework that operationalises these principles for regulatory design.
arXiv Detail & Related papers (2025-12-15T12:16:57Z)
- Toward Effective AI Governance: A Review of Principles [2.5411385112104448]
The aim of this study is to identify which frameworks, principles, mechanisms, and stakeholder roles are emphasized in secondary literature on AI governance. The most cited frameworks include the EU AI Act and NIST RMF; transparency and accountability are the most common principles.
arXiv Detail & Related papers (2025-05-29T13:07:45Z)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
This article theorizes how AI systems consolidate institutional control across education, warfare, and digital discourse. Case studies are analyzed alongside cultural imaginaries such as Orwell's *Nineteen Eighty-Four*, Skynet, and *Black Mirror*, used as tools to surface ethical blind spots.
arXiv Detail & Related papers (2025-04-12T01:01:26Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes. The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels. It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Open Problems in Technical AI Governance [102.19067750759471]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.