Stop saying LLM: Large Discourse Models (LDM) and Artificial Discursive Agent (ADA)?
- URL: http://arxiv.org/abs/2512.19117v1
- Date: Mon, 22 Dec 2025 07:43:43 GMT
- Title: Stop saying LLM: Large Discourse Models (LDM) and Artificial Discursive Agent (ADA)?
- Authors: Amar Lakel
- Abstract summary: This paper proposes an epistemological shift in the analysis of large generative models, replacing the category ''Large Language Models'' (LLM) with that of ''Large Discourse Models'' (LDM). The proposed program aims to replace the ''fascination/fear'' dichotomy with public trials and procedures that make the place, uses, and limits of artificial discursive agents in contemporary social space decipherable.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an epistemological shift in the analysis of large generative models, replacing the category ''Large Language Models'' (LLM) with that of ''Large Discourse Models'' (LDM), and then with that of Artificial Discursive Agent (ADA). The theoretical framework is based on an ontological triad distinguishing three regulatory instances: the apprehension of the phenomenal regularities of the referential world, the structuring of embodied cognition, and the structural-linguistic sedimentation of the utterance within a socio-historical context. LDMs, operating on the product of these three instances (the document), model the discursive projection of a portion of human experience reified by the learning corpus. The proposed program aims to replace the ''fascination/fear'' dichotomy with public trials and procedures that make the place, uses, and limits of artificial discursive agents in contemporary social space decipherable, situating this approach within a perspective of governance and co-regulation involving the State, industry, civil society, and academia.
Related papers
- How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities [75.10343190811592]
Large Language Models (LLMs) are increasingly deployed in socially sensitive domains. Our benchmark offers a principled and interpretable framework for safe and controllable behavior.
arXiv Detail & Related papers (2026-03-03T03:50:13Z)
- The Trinity of Consistency as a Defining Principle for General World Models [106.16462830681452]
General World Models are capable of learning, simulating, and reasoning about objective physical laws. We propose a principled theoretical framework that defines the essential properties requisite for a General World Model. Our work establishes a principled pathway toward general world models, clarifying both the limitations of current systems and the architectural requirements for future progress.
arXiv Detail & Related papers (2026-02-26T16:15:55Z)
- Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models [77.98801218316505]
Large language models (LLMs) exhibit emergent behaviors suggestive of human-like reasoning. We investigate the internal processing of LLMs during in-context concept inference.
arXiv Detail & Related papers (2026-02-08T03:14:39Z)
- A Survey of Vibe Coding with Large Language Models [93.88284590533242]
"Vibe Coding" is a development methodology where developers validate AI-generated implementations through outcome observation. Despite its transformative potential, the effectiveness of this emergent paradigm remains under-explored. This survey provides the first comprehensive and systematic review of Vibe Coding with large language models.
arXiv Detail & Related papers (2025-10-14T11:26:56Z)
- How Good are Foundation Models in Step-by-Step Embodied Reasoning? [79.15268080287505]
Embodied agents must make decisions that are safe, spatially coherent, and grounded in context. Recent advances in large multimodal models have shown promising capabilities in visual understanding and language generation. Our benchmark includes over 1.1k samples with detailed step-by-step reasoning across 10 tasks and 8 embodiments.
arXiv Detail & Related papers (2025-09-18T17:56:30Z)
- The 1st International Workshop on Disentangled Representation Learning for Controllable Generation (DRL4Real): Methods and Results [132.86866727471093]
This paper reviews the 1st International Workshop on Disentangled Representation Learning for Controllable Generation (DRL4Real), held in conjunction with ICCV 2025. DRL4Real focused on evaluating DRL methods in practical applications such as controllable generation, exploring advancements in model robustness, interpretability, and generalization. The workshop accepted 9 papers covering a broad range of topics, including the integration of novel inductive biases (e.g., language), the application of diffusion models to DRL, 3D-aware disentanglement, and the expansion of DRL into specialized domains like autonomous driving and EEG analysis.
arXiv Detail & Related papers (2025-08-15T16:35:41Z)
- Pragmatics beyond humans: meaning, communication, and LLMs [0.0]
The paper reconceptualizes pragmatics as a dynamic interface through which language operates as a tool for action. The paper argues that this understanding needs to be further refined and methodologically reconsidered.
arXiv Detail & Related papers (2025-08-08T09:34:41Z)
- Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models [93.1043186636177]
We explore the hypothesis that people use a combination of distributed and symbolic representations to construct bespoke mental models tailored to novel situations. We propose a computational implementation of this idea, a ''Model Synthesis Architecture'' (MSA). We evaluate our MSA as a model of human judgments on a novel reasoning dataset.
arXiv Detail & Related papers (2025-07-16T18:01:03Z)
- Large Language Models as Quasi-crystals: Coherence Without Repetition in Generative Text [0.0]
This essay proposes an analogy between large language models (LLMs) and quasicrystals, systems that exhibit global coherence without periodic repetition, generated through local constraints. Drawing on the history of quasicrystals, it highlights an alternative mode of coherence in generative language: constraint-based organization without repetition or symbolic intent. The essay aims to reframe the current discussion around large language models, not by rejecting existing methods, but by suggesting an additional axis of interpretation grounded in structure rather than semantics.
arXiv Detail & Related papers (2025-04-16T11:27:47Z)
- Generative Emergent Communication: Large Language Model is a Collective World Model [11.224401802231707]
Large Language Models (LLMs) have demonstrated a remarkable ability to capture extensive world knowledge. This study proposes a novel theoretical solution by introducing the Collective World Model hypothesis.
arXiv Detail & Related papers (2024-12-31T02:23:10Z)
- Language Models as Semiotic Machines: Reconceptualizing AI Language Systems through Structuralist and Post-Structuralist Theories of Language [0.0]
This paper proposes a novel framework for understanding large language models (LLMs).
I argue that LLMs should be understood as models of language itself, aligning with Derrida's concept of 'writing' (l'écriture).
I apply Derrida's critique of Saussure to position 'writing' as the object modeled by LLMs, offering a view of the machine's 'mind' as a statistical approximation of sign behavior.
arXiv Detail & Related papers (2024-10-16T21:45:54Z)
- Theoretical and Methodological Framework for Studying Texts Produced by Large Language Models [0.0]
This paper addresses the conceptual, methodological, and technical challenges in studying large language models (LLMs).
It builds on a theoretical framework that distinguishes between the LLM as a substrate and the entities the model simulates.
arXiv Detail & Related papers (2024-08-29T17:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.