In Dialogue with Intelligence: Rethinking Large Language Models as Collective Knowledge
- URL: http://arxiv.org/abs/2505.22767v2
- Date: Fri, 30 May 2025 16:31:58 GMT
- Title: In Dialogue with Intelligence: Rethinking Large Language Models as Collective Knowledge
- Authors: Eleni Vasilaki
- Abstract summary: Large Language Models (LLMs) are typically analysed through architectural, behavioural, or training-data lenses. This article offers a theoretical and experiential re-framing: LLMs as dynamic instantiations of Collective human Knowledge (CK). I examine emergent dialogue patterns, the implications of fine-tuning, and the notion of co-augmentation: mutual enhancement between human and machine cognition.
- Score: 2.50194939587674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are typically analysed through architectural, behavioural, or training-data lenses. This article offers a theoretical and experiential re-framing: LLMs as dynamic instantiations of Collective human Knowledge (CK), where intelligence is evoked through dialogue rather than stored statically. Drawing on concepts from neuroscience and AI, and grounded in sustained interaction with ChatGPT-4, I examine emergent dialogue patterns, the implications of fine-tuning, and the notion of co-augmentation: mutual enhancement between human and machine cognition. This perspective offers a new lens for understanding interaction, representation, and agency in contemporary AI systems.
Related papers
- Noosemia: toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction [4.022364531869169]
This paper introduces and formalizes Noosemia, a novel cognitive-phenomenological phenomenon emerging from human interaction with generative AI systems. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems.
arXiv Detail & Related papers (2025-08-04T17:10:08Z) - Learning and Reasoning with Model-Grounded Symbolic Artificial Intelligence Systems [7.000073566770884]
Neurosymbolic artificial intelligence (AI) systems combine neural network and classical symbolic AI mechanisms. We develop novel learning and reasoning approaches that preserve structural similarities to traditional learning and reasoning paradigms.
arXiv Detail & Related papers (2025-07-14T01:34:05Z) - Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset [113.25650486482762]
We introduce the Seamless Interaction dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage. This dataset enables the development of AI technologies that understand dyadic embodied dynamics. We develop a suite of models that utilize the dataset to generate dyadic motion gestures and facial expressions aligned with human speech.
arXiv Detail & Related papers (2025-06-27T18:09:49Z) - Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - DeepThought: An Architecture for Autonomous Self-motivated Systems [1.6385815610837167]
We argue that the internal architecture of large language models (LLMs) cannot support intrinsic motivations, agency, or some degree of consciousness.
We propose to integrate LLMs into an architecture for cognitive language agents able to exhibit properties akin to agency, self-motivation, and even some features of meta-cognition.
arXiv Detail & Related papers (2023-11-14T21:20:23Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation [62.44907105496227]
MindDial is a novel conversational framework that can generate situated free-form responses with theory-of-mind modeling.
We introduce an explicit mind module that can track the speaker's belief and the speaker's prediction of the listener's belief.
Our framework is applied to both prompting and fine-tuning-based models, and is evaluated across scenarios involving both common ground alignment and negotiation.
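As a rough illustration of the kind of state such a mind module might track, consider the sketch below; the class and field names are hypothetical and do not mirror MindDial's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical sketch of first- and second-order belief state for a speaker.
# Field and method names are illustrative assumptions, not MindDial's code.
@dataclass
class MindState:
    speaker_belief: Dict[str, str] = field(default_factory=dict)             # what the speaker believes
    predicted_listener_belief: Dict[str, str] = field(default_factory=dict)  # what the speaker thinks the listener believes

    def common_ground(self) -> Dict[str, str]:
        """Facts the speaker believes and also expects the listener to share."""
        return {k: v for k, v in self.speaker_belief.items()
                if self.predicted_listener_belief.get(k) == v}

    def to_negotiate(self) -> Dict[str, str]:
        """Facts where the two belief levels diverge, i.e. candidates for clarification."""
        return {k: v for k, v in self.speaker_belief.items()
                if self.predicted_listener_belief.get(k) != v}
```

A response generator could then condition on `common_ground()` versus `to_negotiate()` to decide whether to assert, confirm, or ask.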
arXiv Detail & Related papers (2023-06-27T07:24:32Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Emotion Recognition in Conversation using Probabilistic Soft Logic [17.62924003652853]
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on conversations containing two or more utterances.
We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language.
PSL provides functionality for the incorporation of results from neural models into PSL models.
We compare our method with state-of-the-art purely neural ERC systems and observe an improvement of almost 20%.
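A minimal sketch of the general idea, blending neural per-utterance emotion scores with a weighted soft-logic-style consistency rule; the rule form and weights are illustrative assumptions, not the paper's PSL program, and the real system uses the PSL engine rather than this hand-rolled scoring.

```python
# Illustrative sketch: combine neural emotion probabilities with a soft rule
# encouraging emotional consistency between adjacent utterances.
from typing import Dict, List

EMOTIONS = ["joy", "anger", "sadness", "neutral"]

def rescore(neural_probs: List[Dict[str, float]],
            w_neural: float = 1.0,
            w_smooth: float = 0.5) -> List[str]:
    """Greedy left-to-right decoding: each utterance's label trades off the
    neural score against agreement with the previous predicted label."""
    labels: List[str] = []
    for probs in neural_probs:
        def score(e: str) -> float:
            s = w_neural * probs.get(e, 0.0)
            if labels:  # soft rule: adjacent utterances tend to share emotion
                s += w_smooth * (1.0 if labels[-1] == e else 0.0)
            return s
        labels.append(max(EMOTIONS, key=score))
    return labels

# Example: weak neural evidence on the second utterance is nudged by context.
print(rescore([{"joy": 0.9}, {"joy": 0.4, "anger": 0.45}]))  # ['joy', 'joy']
```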
arXiv Detail & Related papers (2022-07-14T23:59:06Z) - Enabling Harmonious Human-Machine Interaction with Visual-Context Augmented Dialogue System: A Review [40.49926141538684]
Visual Context Augmented Dialogue System (VAD) has the potential to communicate with humans by perceiving and understanding multimodal information.
VAD possesses the potential to generate engaging and context-aware responses.
arXiv Detail & Related papers (2022-07-02T09:31:37Z) - Retrieval Augmentation Reduces Hallucination in Conversation [49.35235945543833]
We explore the use of neural-retrieval-in-the-loop architectures for knowledge-grounded dialogue.
We show that our best models obtain state-of-the-art performance on two knowledge-grounded conversational tasks.
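A schematic of retrieval-in-the-loop generation, reduced to its core steps; `Retriever`, `Generator`, and the prompt format below are placeholder assumptions, not the models or formats used in the paper.

```python
# Schematic retrieval-augmented dialogue turn. The components are placeholders
# for any dense retriever and any seq2seq/decoder language model.
from typing import List, Protocol

class Retriever(Protocol):
    def search(self, query: str, k: int) -> List[str]: ...

class Generator(Protocol):
    def generate(self, prompt: str) -> str: ...

def grounded_reply(history: List[str], retriever: Retriever,
                   generator: Generator, k: int = 5) -> str:
    """Condition the response on retrieved passages rather than parametric
    memory alone, the mechanism credited with reducing hallucination."""
    query = " ".join(history[-2:])            # recent turns as the retrieval query
    passages = retriever.search(query, k=k)
    prompt = "\n".join(
        ["Knowledge:"] + passages + ["Dialogue:"] + history + ["Response:"]
    )
    return generator.generate(prompt)
```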
arXiv Detail & Related papers (2021-04-15T16:24:43Z) - Neurosymbolic AI for Situated Language Understanding [13.249453757295083]
We argue that computational situated grounding provides a solution to some of these learning challenges.
Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence.
We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges.
arXiv Detail & Related papers (2020-12-05T05:03:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.