Situated Epistemic Infrastructures: A Diagnostic Framework for Post-Coherence Knowledge
- URL: http://arxiv.org/abs/2508.04995v1
- Date: Thu, 07 Aug 2025 03:08:23 GMT
- Title: Situated Epistemic Infrastructures: A Diagnostic Framework for Post-Coherence Knowledge
- Authors: Matthew Kelly
- Abstract summary: This paper introduces the Situated Epistemic Infrastructures framework as a diagnostic tool for analyzing how knowledge becomes authoritative under post-coherence conditions. The paper contributes to debates on AI governance, knowledge production, and the ethical design of information systems by offering a robust alternative to representationalist models of scholarly communication.
- Score: 1.6724987809393428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) such as ChatGPT have rendered visible the fragility of contemporary knowledge infrastructures by simulating coherence while bypassing traditional modes of citation, authority, and validation. This paper introduces the Situated Epistemic Infrastructures (SEI) framework as a diagnostic tool for analyzing how knowledge becomes authoritative across hybrid human-machine systems under post-coherence conditions. Rather than relying on stable scholarly domains or bounded communities of practice, SEI traces how credibility is mediated across institutional, computational, and temporal arrangements. Integrating insights from infrastructure studies, platform theory, and epistemology, the framework foregrounds coordination over classification, emphasizing the need for anticipatory and adaptive models of epistemic stewardship. The paper contributes to debates on AI governance, knowledge production, and the ethical design of information systems by offering a robust alternative to representationalist models of scholarly communication.
Related papers
- Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications [59.721265428780946]
Large Language Models (LLMs) in medicine have enabled impressive capabilities, yet a critical gap remains in their ability to perform systematic, transparent, and verifiable reasoning. This paper provides the first systematic review of this emerging field. We propose a taxonomy of reasoning enhancement techniques, categorized into training-time strategies and test-time mechanisms.
arXiv Detail & Related papers (2025-08-01T14:41:31Z)
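As an assumed illustration of what a test-time mechanism can look like (not necessarily one from the paper's taxonomy), the sketch below implements simple self-consistency voting over sampled answers; the `sample_answer` function is a hypothetical stand-in for a stochastic LLM call.

```python
from collections import Counter
import random

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for an LLM sampled at nonzero temperature."""
    return random.choice(["A", "A", "A", "B"])  # toy distribution over answers

def self_consistency(question: str, n_samples: int = 9) -> str:
    """Sample several reasoning paths and return the majority answer."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency("Which treatment is indicated?"))
```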
- Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems [0.0]
This study presents an ontology-based framework for bone disease diagnosis, developed in collaboration with Ho Chi Minh City Hospital for Traumatology and Orthopedics. The methodology maintains clinical interpretability through systematic knowledge digitization, standardized medical terminology mapping, and modular architecture design. Future work will focus on expanding the clinical dataset and conducting comprehensive system validation.
arXiv Detail & Related papers (2025-06-05T08:41:23Z)
- Information Science Principles of Machine Learning: A Causal Chain Meta-Framework Based on Formalized Information Mapping [1.145820303039203]
This study focuses on addressing the current lack of a unified formal theoretical framework in machine learning. Universal definitions for model interpretability and ethical safety are proposed. Three key theorems are proved: the equivalence of model interpretability and information recoverability, the assurance of ethical safety, and the estimation of generalization error.
arXiv Detail & Related papers (2025-05-19T14:39:41Z)
- PyTDC: A multimodal machine learning training, evaluation, and inference platform for biomedical foundation models [59.17570021208177]
PyTDC is a machine-learning platform providing streamlined training, evaluation, and inference software for multimodal biological AI models. This paper discusses the components of PyTDC's architecture and presents, to our knowledge, a first-of-its-kind case study on the introduced single-cell drug-target nomination ML task.
arXiv Detail & Related papers (2025-05-08T18:15:38Z)
- KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning [46.85451489222176]
KERAIA is a novel framework and software platform for symbolic knowledge engineering. It addresses the persistent challenges of representing, reasoning with, and executing knowledge in dynamic, complex, and context-sensitive environments.
arXiv Detail & Related papers (2025-05-07T10:56:05Z)
- A process algebraic framework for multi-agent dynamic epistemic systems [55.2480439325792]
We propose a unifying framework for modeling and analyzing multi-agent, knowledge-based, dynamic systems.
On the modeling side, we propose a process algebraic, agent-oriented specification language that makes such a framework easy to use for practical purposes.
arXiv Detail & Related papers (2024-07-24T08:35:50Z)
- Towards Ontology-Enhanced Representation Learning for Large Language Models [0.18416014644193066]
We propose a novel approach to improve an embedding-Large Language Model (embedding-LLM) of interest by infusing it with knowledge from a reference ontology.
The linguistic information (i.e. concept synonyms and descriptions) and structural information (i.e. is-a relations) are utilized to compile a comprehensive set of concept definitions.
These concept definitions are then employed to fine-tune the target embedding-LLM using a contrastive learning framework.
arXiv Detail & Related papers (2024-05-30T23:01:10Z)
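A minimal sketch of the contrastive fine-tuning step described in the entry above, under assumed details: the toy encoder, the small ontology fragment, and the in-batch InfoNCE loss are illustrative stand-ins, not the paper's actual embedding-LLM, ontology, or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def compile_definition(concept, synonyms, description, parents):
    """Flatten linguistic (synonyms, description) and structural (is-a) information into one text."""
    return (f"{concept}; also known as {', '.join(synonyms)}. "
            f"{description} It is a kind of {', '.join(parents)}.")

class ToyEncoder(nn.Module):
    """Stand-in for an embedding-LLM: bag of hashed tokens -> normalized dense vector."""
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.vocab_size = vocab_size
        self.emb = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, texts):
        ids = [torch.tensor([hash(tok) % self.vocab_size for tok in t.lower().split()])
               for t in texts]
        offsets = torch.tensor([0] + [len(i) for i in ids[:-1]]).cumsum(0)
        return F.normalize(self.emb(torch.cat(ids), offsets), dim=-1)

def info_nce(anchors, positives, temperature=0.07):
    """In-batch contrastive loss: each concept's positive is its own compiled definition."""
    logits = anchors @ positives.t() / temperature
    return F.cross_entropy(logits, torch.arange(anchors.size(0)))

# Toy ontology fragment (assumed data, for illustration only).
ontology = {
    "myocardial infarction": (["heart attack"], "Necrosis of heart muscle caused by ischemia.", ["heart disease"]),
    "osteoporosis": (["porous bone"], "Reduced bone density that increases fracture risk.", ["bone disease"]),
}

encoder = ToyEncoder()
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-3)

names = list(ontology)
definitions = [compile_definition(name, *ontology[name]) for name in names]
for _ in range(10):  # a few contrastive fine-tuning steps
    loss = info_nce(encoder(names), encoder(definitions))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```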
- Knowledge-Aware Bayesian Deep Topic Model [50.58975785318575]
We propose a Bayesian generative model for incorporating prior domain knowledge into hierarchical topic modeling.
Our proposed model efficiently integrates the prior knowledge and improves both hierarchical topic discovery and document representation.
arXiv Detail & Related papers (2022-09-20T09:16:05Z)
- EBOCA: Evidences for BiOmedical Concepts Association Ontology [55.41644538483948]
This paper proposes EBOCA, an ontology that describes (i) biomedical domain concepts and associations between them, and (ii) evidences supporting these associations.
Test data from a subset of DISNET, together with automatic association extractions from texts, has been transformed into a Knowledge Graph that can be used in real scenarios.
arXiv Detail & Related papers (2022-08-01T18:47:03Z)
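To make concrete what an association plus its supporting evidence can look like in a knowledge graph, the sketch below builds a tiny RDF graph with rdflib; the namespace, class, and property names are illustrative placeholders rather than the actual EBOCA vocabulary, and the example triples are invented.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ebo/")  # placeholder namespace, not EBOCA's real IRI

g = Graph()
g.bind("ex", EX)

# An association between two biomedical concepts.
assoc = EX["association_1"]
g.add((assoc, RDF.type, EX.Association))
g.add((assoc, EX.subject, EX["Disease_Psoriasis"]))
g.add((assoc, EX.object, EX["Symptom_Itching"]))

# An evidence node supporting the association, e.g. a sentence extracted from text.
evidence = EX["evidence_1"]
g.add((evidence, RDF.type, EX.Evidence))
g.add((evidence, EX.supports, assoc))
g.add((evidence, EX.sourceText, Literal("Itching is a frequent symptom of psoriasis.")))

print(g.serialize(format="turtle"))
```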
- Biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data [0.0]
We provide a structured literature analysis focused on Deep Learning (DL) models used to support inference in cancer biology.
The work focuses on how existing models address the need for better dialogue with prior knowledge, biological plausibility and interpretability.
arXiv Detail & Related papers (2022-07-02T12:11:35Z)