A time for monsters: Organizational knowing after LLMs
- URL: http://arxiv.org/abs/2511.15762v1
- Date: Wed, 19 Nov 2025 14:07:47 GMT
- Title: A time for monsters: Organizational knowing after LLMs
- Authors: Samer Faraj, Joel Perez Torrents, Saku Mantere, Anand Bhardwaj,
- Abstract summary: Large Language Models (LLMs) are reshaping organizational knowing by unsettling the foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) are reshaping organizational knowing by unsettling the epistemological foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry. Focusing on analogizing as a fundamental driver of knowledge, we examine how LLMs generate connections through large-scale statistical inference. Analyzing their operation across the dimensions of surface/deep analogies and near/far domains, we highlight both their capacity to expand organizational knowing and the epistemic risks they introduce. Building on this, we identify three challenges of living with such epistemic monsters: the transformation of inquiry, the growing need for dialogical vetting, and the redistribution of agency. By foregrounding the entangled dynamics of knowing-with-LLMs, the paper extends organizational theory beyond human-centered epistemologies and invites renewed attention to how knowledge is created, validated, and acted upon in the age of intelligent technologies.
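The two dimensions named in the abstract (surface vs. deep analogies, near vs. far domains) can be made concrete with a small sketch. The Python snippet below is a hypothetical illustration, not part of the paper: it assumes embedding cosine similarity as a proxy for domain distance and shared-relation overlap as a proxy for analogical depth, and every name, vector, and threshold in it is invented for illustration only.

```python
# Toy, assumption-laden sketch of the surface/deep x near/far analogy grid.
# Proxies (assumptions, not from the paper): cosine similarity between domain
# embeddings approximates near vs. far; overlap of shared relations approximates
# surface vs. deep.

from dataclasses import dataclass
import math


@dataclass
class Domain:
    name: str
    embedding: list[float]   # toy semantic vector for the domain
    relations: set[str]      # relational structure the domain exposes


def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def classify_analogy(source: Domain, target: Domain,
                     near_threshold: float = 0.5,
                     deep_threshold: float = 0.5) -> str:
    """Place a candidate source->target analogy in the surface/deep x near/far grid."""
    semantic_similarity = cosine(source.embedding, target.embedding)
    shared = source.relations & target.relations
    structural_overlap = len(shared) / max(len(source.relations | target.relations), 1)

    distance = "near" if semantic_similarity >= near_threshold else "far"
    depth = "deep" if structural_overlap >= deep_threshold else "surface"
    return f"{depth} analogy across {distance} domains"


# Illustrative use: a candidate analogy between organizational routines and metabolism.
routines = Domain("organizational routines", [0.9, 0.1, 0.3], {"feeds-on", "adapts", "repeats"})
metabolism = Domain("biological metabolism", [0.2, 0.8, 0.4], {"feeds-on", "adapts", "regulates"})

print(classify_analogy(routines, metabolism))  # e.g. "deep analogy across far domains"
```

On this toy reading, the epistemically risky cases the paper worries about would sit in the "surface analogy across far domains" cell, where statistically generated connections look novel but lack relational depth.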
Related papers
- Large language models for spreading dynamics in complex systems [15.581915022853337]
Spreading dynamics is a central topic in the physics of complex systems and network science. Large language models (LLMs) have exhibited strong capabilities in natural language understanding, reasoning, and generation. LLMs can act as interactive agents embedded in propagation systems, potentially influencing spreading pathways and feedback structures.
arXiv Detail & Related papers (2026-02-08T18:58:43Z) - LLM-empowered knowledge graph construction: A survey [0.0]
Knowledge Graphs have long served as a fundamental infrastructure for structured knowledge representation and reasoning. With the advent of Large Language Models (LLMs), the construction of KGs has entered a new paradigm, shifting from rule-based and statistical pipelines to language-driven and generative frameworks.
arXiv Detail & Related papers (2025-10-23T08:43:28Z) - From Perception to Cognition: A Survey of Vision-Language Interactive Reasoning in Multimodal Large Language Models [66.36007274540113]
Multimodal Large Language Models (MLLMs) strive to achieve a profound, human-like understanding of and interaction with the physical world. However, they often exhibit a shallow and incoherent integration when acquiring information (Perception) and conducting reasoning (Cognition). This survey introduces a novel and unified analytical framework: "From Perception to Cognition".
arXiv Detail & Related papers (2025-09-29T18:25:40Z) - Knowledge Homophily in Large Language Models [75.12297135039776]
We investigate an analogous knowledge homophily pattern in Large Language Models (LLMs). We map LLM knowledge into a graph representation through knowledge checking at both the triplet and entity levels. Motivated by this homophily principle, we propose a Graph Neural Network (GNN) regression model to estimate entity-level knowledgeability scores for triplets.
arXiv Detail & Related papers (2025-09-28T09:40:27Z) - Unraveling the cognitive patterns of Large Language Models through module communities [45.399985422756224]
Large Language Models (LLMs) have reshaped our world with significant advancements in science, engineering, and society. Despite their ubiquity and utility, the underlying mechanisms of LLMs remain concealed within billions of parameters and complex structures. We address this gap by adopting approaches used to understand emerging cognition in biology.
arXiv Detail & Related papers (2025-08-25T16:49:38Z) - Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks [46.93509559847712]
Consciousness is one of the most profound and distinguishing features of the human mind. As large language models (LLMs) develop at an unprecedented pace, questions concerning intelligence and consciousness have become increasingly significant.
arXiv Detail & Related papers (2025-05-26T10:40:52Z) - Unveiling Knowledge Utilization Mechanisms in LLM-based Retrieval-Augmented Generation [77.10390725623125]
Retrieval-augmented generation (RAG) is widely employed to expand the knowledge scope of LLMs. Since RAG has shown promise in knowledge-intensive tasks like open-domain question answering, its broader application to complex tasks and intelligent assistants has further advanced its utility. We present a systematic investigation of the intrinsic mechanisms by which RAG systems integrate internal (parametric) and external (retrieved) knowledge.
arXiv Detail & Related papers (2025-05-17T13:13:13Z) - A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z) - LogiDynamics: Unraveling the Dynamics of Inductive, Abductive and Deductive Logical Inferences in LLM Reasoning [74.0242521818214]
This paper systematically investigates the comparative dynamics of inductive (System 1) versus abductive/deductive (System 2) inference in large language models (LLMs). We utilize a controlled analogical reasoning environment, varying modality (textual, visual, symbolic), difficulty, and task format (MCQ / free-text). Our analysis reveals that System 2 pipelines generally excel, particularly in visual/symbolic modalities and harder tasks, while System 1 is competitive for textual and easier problems.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance and improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.