The Vector Grounding Problem
- URL: http://arxiv.org/abs/2304.01481v1
- Date: Tue, 4 Apr 2023 02:54:04 GMT
- Title: The Vector Grounding Problem
- Authors: Dimitri Coelho Mollo, Raphaël Millière
- Abstract summary: We argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem.
We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remarkable performance of large language models (LLMs) on complex
linguistic tasks has sparked a lively debate on the nature of their
capabilities. Unlike humans, these models learn language exclusively from
textual data, without direct interaction with the real world. Nevertheless,
they can generate seemingly meaningful text about a wide range of topics. This
impressive accomplishment has rekindled interest in the classical 'Symbol
Grounding Problem,' which questioned whether the internal representations and
outputs of classical symbolic AI systems could possess intrinsic meaning.
Unlike these systems, modern LLMs are artificial neural networks that compute
over vectors rather than symbols. However, an analogous problem arises for such
systems, which we dub the Vector Grounding Problem. This paper has two primary
objectives. First, we differentiate various ways in which internal
representations can be grounded in biological or artificial systems,
identifying five distinct notions discussed in the literature: referential,
sensorimotor, relational, communicative, and epistemic grounding.
Unfortunately, these notions of grounding are often conflated. We clarify the
differences between them, and argue that referential grounding is the one that
lies at the heart of the Vector Grounding Problem. Second, drawing on theories
of representational content in philosophy and cognitive science, we propose
that certain LLMs, particularly those fine-tuned with Reinforcement Learning
from Human Feedback (RLHF), possess the necessary features to overcome the
Vector Grounding Problem, as they stand in the requisite causal-historical
relations to the world that underpin intrinsic meaning. We also argue that,
perhaps unexpectedly, multimodality and embodiment are neither necessary nor
sufficient conditions for referential grounding in artificial systems.
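Since the paper's positive claim turns on RLHF, a caricature of that fine-tuning loop may help fix ideas: a policy samples completions, a reward model standing in for aggregated human preference judgments scores them, and a policy-gradient update shifts probability toward preferred outputs. The sketch below is a minimal illustration in this spirit, not the paper's method; the completions, reward function, and hyperparameters are all invented.

```python
# Tiny, pure-Python caricature of the RLHF fine-tuning loop. A "policy"
# holds one logit per canned completion; a stand-in "reward model" (trained,
# in practice, on human preference data) scores samples; a REINFORCE-style
# update shifts probability mass toward higher-reward outputs.
import math
import random

COMPLETIONS = ["the capital of France is Paris",
               "the capital of France is Lyon",
               "bananas are a kind of fish"]

logits = [0.0, 0.0, 0.0]  # policy parameters, one per completion

def reward(text: str) -> float:
    """Toy reward model: a proxy for human feedback on truthfulness."""
    return 1.0 if "Paris" in text else -1.0 if "fish" in text else 0.0

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

LR = 0.5
for step in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(COMPLETIONS)), weights=probs)[0]
    r = reward(COMPLETIONS[i])
    # REINFORCE: d/d_logits of log pi(i) is (one_hot(i) - probs)
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += LR * r * grad

# after training, most probability sits on the human-preferred completion
print({c: round(p, 3) for c, p in zip(COMPLETIONS, softmax(logits))})
```

On the paper's view, what matters for grounding is not the arithmetic of this update but the causal chain it creates: the reward signal traces back through human judgments to the world itself.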
Related papers
- Grounding from an AI and Cognitive Science Lens [4.624355582375099]
This article explores grounding from both cognitive science and machine learning perspectives.
It identifies the subtleties of grounding, its significance for collaborative agents, and similarities and differences in grounding approaches in both communities.
arXiv Detail & Related papers (2024-02-19T17:44:34Z)
- Three Pathways to Neurosymbolic Reinforcement Learning with Interpretable Model and Policy Networks [4.242435932138821]
We study a class of neural networks that build interpretable semantics directly into their architecture.
We highlight both the potential and the essential difficulties of combining logic, simulation, and learning.
arXiv Detail & Related papers (2024-02-07T23:00:24Z)
- Exploring Spatial Schema Intuitions in Large Language and Vision Models [8.944921398608063]
We investigate whether large language models (LLMs) effectively capture implicit human intuitions about building blocks of language.
Surprisingly, correlations between model outputs and human responses emerge, revealing adaptability without a tangible connection to embodied experiences.
This research contributes to a nuanced understanding of the interplay between language, spatial experiences, and computations made by large language models.
arXiv Detail & Related papers (2024-02-01T19:25:50Z)
- Grounding for Artificial Intelligence [8.13763396934359]
Grounding is the process of connecting natural language and abstract knowledge to an intelligent being's internal representation of the real world.
This paper attempts to study this problem systematically.
arXiv Detail & Related papers (2023-12-15T04:45:48Z)
- Visually Grounded Language Learning: a review of language games, datasets, tasks, and models [60.2604624857992]
Many Vision+Language (V+L) tasks have been defined with the aim of creating models that can ground symbols in the visual modality.
In this work, we provide a systematic literature review of several tasks and models proposed in the V+L field.
arXiv Detail & Related papers (2023-12-05T02:17:29Z)
- Grounding Gaps in Language Model Generations [67.79817087930678]
We study whether large language models generate text that reflects human grounding.
We find that -- compared to humans -- LLMs generate language with less conversational grounding.
To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization.
arXiv Detail & Related papers (2023-11-15T17:40:27Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
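To make the inference-engine framing concrete, here is a minimal sketch of Prolog-style backward chaining with the two learned components stubbed out: `retrieve` stands in for dense retrieval over a fact corpus, and `decompose` stands in for neurally guided generation of candidate subgoal decompositions. The facts, decompositions, and goal are invented for illustration and do not reflect NELLIE's actual implementation.

```python
# Backward chaining with stubbed neural components: a goal holds if it is
# retrieved from the fact store, or if some proposed decomposition yields
# subgoals that all recursively hold. The return value is a proof tree,
# which is what makes this style of QA interpretable.

FACTS = {"an eagle is a bird", "birds have wings"}

# hypothetical decompositions a generator model might propose for a goal
DECOMPOSITIONS = {
    "an eagle has wings": [["an eagle is a bird", "birds have wings"]],
}

def retrieve(goal: str) -> bool:
    """Stand-in for dense retrieval: exact match against the fact store."""
    return goal in FACTS

def decompose(goal: str):
    """Stand-in for guided generation of candidate subgoal lists."""
    return DECOMPOSITIONS.get(goal, [])

def prove(goal: str, depth: int = 3):
    if retrieve(goal):
        return [goal]                      # leaf of the proof tree
    if depth == 0:
        return None
    for subgoals in decompose(goal):
        proofs = [prove(g, depth - 1) for g in subgoals]
        if all(proofs):
            return [goal, proofs]          # interpretable proof tree
    return None

print(prove("an eagle has wings"))
```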
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
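As one illustration of how a network's continuous states can yield discrete, symbol-like codes, the sketch below uses simple vector quantization: each hidden vector is snapped to its nearest codebook entry, and the entry's index serves as a token. This is a generic technique, assumed here for illustration; the paper derives its language through an interactive environment and task rather than this exact mechanism, and the codebook and vectors below are invented.

```python
# Vector quantization: map each continuous embedding to the index of its
# nearest codebook entry, turning a sequence of hidden states into a
# discrete, symbol-like message.
import math

CODEBOOK = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]  # 3 learnable code vectors

def quantize(vec):
    """Return (symbol_index, code_vector) for the nearest codebook entry."""
    dists = [math.dist(vec, code) for code in CODEBOOK]
    idx = dists.index(min(dists))
    return idx, CODEBOOK[idx]

# a "sender" network's continuous outputs become a discrete message
hidden_states = [(0.9, 0.2), (-0.8, 0.1), (0.1, 1.1)]
message = [quantize(h)[0] for h in hidden_states]
print(message)  # prints [0, 2, 1]: a discrete, symbol-like code
```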
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
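The transparency condition can be made concrete with a toy evaluator: in a referentially transparent language, the value of a compound expression depends only on the values of its parts, so any subexpression can be replaced by another with the same value without changing the result. The mini-language below is invented for illustration.

```python
# Compositional evaluation of a referentially transparent expression
# language: expressions are ints, variable names, or ("add"|"mul", a, b).

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    va, vb = evaluate(a, env), evaluate(b, env)
    return va + vb if op == "add" else va * vb

env = {"x": 2, "y": 3}
e1 = ("add", "x", ("mul", "y", 2))  # x + y*2
e2 = ("add", "x", 6)                # swap y*2 for its value, 6
assert evaluate(e1, env) == evaluate(e2, env)  # transparency holds
print(evaluate(e1, env))  # 8
```

The paper's negative result concerns constructs such as variable binding, where a subterm's contribution depends on its context; there, this swap-equals-for-equals reasoning no longer suffices, and emulation can become uncomputable.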
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.