The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
- URL: http://arxiv.org/abs/2512.19570v1
- Date: Mon, 22 Dec 2025 16:52:37 GMT
- Title: The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
- Authors: Angjelin Hila
- Abstract summary: We develop a theory of rationality as distributed across human collectives, using dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine epistemological threats posed by human and LLM interaction. We develop collective epistemology as a theory of epistemic warrant distributed across human collectives, using bounded rationality and dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. Both are necessary for collective rationality, but only internalist justification produces reflective knowledge. We specify reflective knowledge as follows: agents understand the evaluative basis of a claim; when that basis is unavailable, agents consistently assess the reliability of truth sources; and agents have a duty to apply these standards within their domains of competence. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification. Widespread outsourcing of reflective work to reliable LLM outputs can weaken reflective standards of justification, disincentivize comprehension, and reduce agents' capacity to meet professional and civic epistemic duties. To mitigate these risks, we propose a three-tier norm program that includes an epistemic interaction model for individual use, institutional and organizational frameworks that seed and enforce norms for epistemically optimal outcomes, and deontic constraints at organizational and/or legislative levels that instantiate discursive norms and curb epistemic vices.
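As a reading aid, the abstract's core distinctions can be rendered as a small data model. The sketch below is illustrative only: all names (`Justification`, `Claim`, `is_reflective_knowledge`, `NORM_TIERS`) are ours, not the authors', and the boolean conditions are a crude stand-in for the paper's richer account of reflective knowledge.

```python
from dataclasses import dataclass
from enum import Enum


class Justification(Enum):
    """The paper's two kinds of epistemic warrant."""
    INTERNALIST = "reflective understanding of why a proposition is true"
    EXTERNALIST = "reliable transmission of truths established elsewhere"


@dataclass
class Claim:
    proposition: str
    justification: Justification
    source: str  # e.g. "domain expert", "LLM output"


def is_reflective_knowledge(claim: Claim,
                            agent_understands_basis: bool,
                            agent_assessed_source_reliability: bool) -> bool:
    """Crude rendering of the paper's conditions: the agent either
    understands the evaluative basis of the claim or, when that basis
    is unavailable, has consistently assessed the reliability of the
    truth source."""
    if claim.justification is Justification.INTERNALIST:
        return agent_understands_basis
    return agent_assessed_source_reliability


# The three-tier norm program as a simple ordered structure.
NORM_TIERS = [
    "individual: an epistemic interaction model for LLM use",
    "institutional: frameworks that seed and enforce epistemic norms",
    "deontic: organizational/legislative constraints on discursive norms",
]
```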
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- From Bias Mitigation to Bias Negotiation: Governing Identity and Sociocultural Reasoning in Generative AI [0.0]
LLMs act in the social world by drawing upon shared cultural patterns to make social situations understandable and actionable. Identity is often part of the inferential substrate of competent judgment. The dominant governance regime for identity-related harm remains bias mitigation.
arXiv Detail & Related papers (2026-02-05T21:20:10Z)
- The MEVIR Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions [0.0]
This report introduces the Moral-Epistemic VIRtue-informed (MEVIR) framework. Central to the framework are ontological concepts: Truth Bearers, Truth Makers, and Ontological Unpacking. The report analyzes how propaganda, psychological operations, and echo chambers exploit the MEVIR process.
arXiv Detail & Related papers (2025-12-02T01:11:35Z)
- Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts [79.1081247754018]
Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks. We propose a framework based on Contact Searching Questions (CSQ) to quantify the likelihood of deception.
arXiv Detail & Related papers (2025-08-08T14:46:35Z)
- Toward a Theory of Agents as Tool-Use Decision-Makers [89.26889709510242]
We argue that true autonomy requires agents to be grounded in a coherent epistemic framework that governs what they know, what they need to know, and how to acquire that knowledge efficiently. We propose a unified theory that treats internal reasoning and external actions as equivalent epistemic tools, enabling agents to systematically coordinate introspection and interaction. This perspective shifts the design of agents from mere action executors to knowledge-driven intelligence systems, offering a principled path toward building foundation agents capable of adaptive, efficient, and goal-directed behavior.
arXiv Detail & Related papers (2025-06-01T07:52:16Z)
- Language Models Surface the Unwritten Code of Science and Society [1.6245906033871593]
This paper calls on the research community to investigate how human biases are inherited by large language models (LLMs). We introduce a conceptual framework through a case study in science: uncovering hidden rules in peer review.
arXiv Detail & Related papers (2025-05-25T02:28:40Z)
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs [50.259001311894295]
We propose a novel TRansformer-based Attribution framework using Contrastive Embeddings called TRACE.
We show that TRACE significantly improves the ability to attribute sources accurately, making it a valuable tool for enhancing the reliability and trustworthiness of large language models.
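The summary above does not spell out TRACE's architecture or training objective, so the following is only a generic sketch of embedding-based attribution: score each candidate source by cosine similarity to the model output and return the best match. A toy bag-of-words encoder (`embed`) stands in for TRACE's learned contrastive encoder.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; TRACE would use a transformer
    # encoder trained with a contrastive objective instead.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def attribute(output: str, sources: list[str]) -> str:
    """Return the candidate source most similar to the model output."""
    out_vec = embed(output)
    return max(sources, key=lambda s: cosine(embed(s), out_vec))


sources = [
    "The mitochondrion is the powerhouse of the cell.",
    "Transformers use self-attention over token embeddings.",
]
# Picks the second source, which shares the most vocabulary.
print(attribute("Self-attention lets transformers weigh tokens.", sources))
```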
arXiv Detail & Related papers (2024-07-06T07:19:30Z)
- Towards Logically Consistent Language Models via Probabilistic Reasoning [14.317886666902822]
Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks.
LLMs are prone to generate non-factual information and to contradict themselves when prompted to reason about beliefs of the world.
We introduce a training objective that teaches an LLM to be consistent with external knowledge in the form of a set of facts and rules.
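The exact objective is not given in this summary; one common way to make "be consistent with the rule A → B" differentiable is a probabilistic (semantic-loss-style) penalty on the model's estimated probability of violating the rule. The sketch below is written under that assumption, not as the paper's formulation.

```python
import torch


def implication_consistency_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Penalty for violating the rule A -> B, given the model's
    probabilities p(A) and p(B) for two factual statements.
    The rule is violated with probability p(A) * (1 - p(B)), so we
    minimize -log of the probability that it is satisfied."""
    violation = p_a * (1.0 - p_b)
    return -torch.log(1.0 - violation + 1e-9)


# Example: the model is confident in A but not in B -> large penalty.
p_a = torch.tensor(0.95)
p_b = torch.tensor(0.20)
print(implication_consistency_loss(p_a, p_b))  # -log(0.24) ~ 1.42
```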
arXiv Detail & Related papers (2024-04-19T12:23:57Z)
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
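The summary gives no implementation detail, so the sketch below shows only the generic reasoner/evaluator pattern such multi-agent frameworks follow: one agent proposes an answer with its reasoning, peer agents vote on its consistency, and the proposal is retried until a majority approves. The stub agents are placeholders for real LLM calls.

```python
from typing import Callable


def causal_consistency_answer(question: str,
                              reasoner: Callable[[str], str],
                              evaluators: list[Callable[[str, str], bool]],
                              max_rounds: int = 3) -> str:
    """Propose, check, and retry: accept an answer only when a
    majority of evaluator agents judge its reasoning consistent."""
    answer = ""
    for _ in range(max_rounds):
        answer = reasoner(question)
        votes = sum(ev(question, answer) for ev in evaluators)
        if votes > len(evaluators) / 2:
            return answer
    return answer  # fall back to the last proposal


# Stub agents; a real system would call an LLM API here.
reasoner = lambda q: "Rain causes wet streets; the street is wet because it rained."
evaluators = [lambda q, a: "causes" in a, lambda q, a: len(a) > 10]
print(causal_consistency_answer("Why is the street wet?", reasoner, evaluators))
```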
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
- Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning [85.1541170468617]
This paper reconsiders the nature of commonsense reasoning and proposes a novel commonsense reasoning metric, Non-Replacement Confidence (NRC).
Our proposed method boosts zero-shot performance on two commonsense reasoning benchmark datasets and a further seven commonsense question-answering datasets.
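The NRC formula itself is not reproduced in this summary. For contrast, here is the standard perplexity-based zero-shot scorer that the paper argues against, implemented with GPT-2 via Hugging Face transformers; NRC replaces this ranking signal with a token-level confidence measure.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def perplexity(text: str) -> float:
    """Standard LM perplexity: exp of the mean token-level NLL."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))


# Zero-shot multiple choice: pick the completion the LM finds
# least surprising; NRC instead scores the model's confidence
# that each observed token would not be replaced.
candidates = [
    "You water a plant because it needs moisture to grow.",
    "You water a plant because it needs music to grow.",
]
print(min(candidates, key=perplexity))
```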
arXiv Detail & Related papers (2022-08-23T14:42:14Z)