On measuring grounding and generalizing grounding problems
- URL: http://arxiv.org/abs/2512.06205v1
- Date: Fri, 05 Dec 2025 22:58:47 GMT
- Title: On measuring grounding and generalizing grounding problems
- Authors: Daniel Quigley, Eric Maynard
- Abstract summary: The symbol grounding problem asks how tokens like cat can be about cats, as opposed to mere shapes manipulated in a calculus. We recast grounding from a binary judgment into an audit across desiderata, each indexed by an evaluation tuple. We apply this framework to four grounding modes (symbolic; referential; vectorial; relational) and three case studies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The symbol grounding problem asks how tokens like cat can be about cats, as opposed to mere shapes manipulated in a calculus. We recast grounding from a binary judgment into an audit across desiderata, each indexed by an evaluation tuple (context, meaning type, threat model, reference distribution): authenticity (mechanisms reside inside the agent and, for strong claims, were acquired through learning or evolution); preservation (atomic meanings remain intact); faithfulness, both correlational (realized meanings match intended ones) and etiological (internal mechanisms causally contribute to success); robustness (graceful degradation under declared perturbations); compositionality (the whole is built systematically from the parts). We apply this framework to four grounding modes (symbolic; referential; vectorial; relational) and three case studies: model-theoretic semantics achieves exact composition but lacks etiological warrant; large language models show correlational fit and local robustness for linguistic tasks, yet lack selection-for-success on world tasks without grounded interaction; human language meets the desiderata under strong authenticity through evolutionary and developmental acquisition. By operationalizing a philosophical inquiry about representation, we equip philosophers of science, computer scientists, linguists, and mathematicians with a common language and technical framework for systematic investigation of grounding and meaning.
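A minimal sketch in Python of the audit structure the abstract describes: an evaluation tuple indexing a profile of desiderata scores. All names and the [0, 1] scoring convention are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class EvaluationTuple:
    """Index for one grounding audit: the four coordinates from the abstract."""
    context: str                  # task or setting the claim is made in
    meaning_type: str             # e.g. "referential" vs. "inferential"
    threat_model: str             # declared perturbations the claim must survive
    reference_distribution: str   # inputs over which scores are measured

# The six desiderata named in the abstract.
DESIDERATA = (
    "authenticity",
    "preservation",
    "correlational_faithfulness",
    "etiological_faithfulness",
    "robustness",
    "compositionality",
)

def audit(index: EvaluationTuple,
          evaluators: Dict[str, Callable[[EvaluationTuple], float]]) -> Dict[str, float]:
    """Score every desideratum at one evaluation tuple.

    Grounding is reported as a profile of scores rather than a single
    yes/no verdict, mirroring the recasting of grounding from a binary
    judgment into an audit.
    """
    return {d: evaluators[d](index) for d in DESIDERATA if d in evaluators}

# Hypothetical usage: auditing an LLM on a linguistic task.
idx = EvaluationTuple(
    context="linguistic QA",
    meaning_type="inferential",
    threat_model="paraphrase perturbations",
    reference_distribution="held-out English text",
)
stub_evaluators = {d: (lambda _idx: 0.5) for d in DESIDERATA}  # stand-ins
print(audit(idx, stub_evaluators))
```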
Related papers
- A Multimodal Framework for Aligning Human Linguistic Descriptions with Visual Perceptual Data
We introduce a computational framework designed to model core aspects of human referential interpretation. We evaluate the model on the Stanford Repeated Reference Game corpus. Results suggest that relatively simple perceptual-linguistic alignment mechanisms can yield human-competitive behavior.
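One simple alignment mechanism of the kind this summary gestures at can be sketched as referent resolution by cosine similarity between a description embedding and candidate perceptual embeddings. This is an assumed, generic mechanism for illustration, not the paper's model:

```python
import numpy as np

def resolve_referent(description_vec: np.ndarray,
                     candidate_vecs: np.ndarray) -> int:
    """Pick the candidate object whose perceptual embedding best matches
    the linguistic description, by cosine similarity.

    description_vec: (d,) embedding of the utterance
    candidate_vecs:  (n, d) embeddings of the n candidate objects
    """
    d = description_vec / np.linalg.norm(description_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    return int(np.argmax(c @ d))

# Hypothetical usage with random stand-ins for real embeddings.
rng = np.random.default_rng(0)
desc, cands = rng.normal(size=8), rng.normal(size=(3, 8))
print(resolve_referent(desc, cands))  # index of the best-matching candidate
```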
arXiv Detail & Related papers (2026-02-23T07:20:11Z)
- The Mechanistic Emergence of Symbol Grounding in Language Models
Symbol grounding describes how symbols acquire their meanings by connecting to real-world sensorimotor experiences. Recent work has shown preliminary evidence that grounding may emerge in (vision-language) models trained at scale without using explicit grounding objectives. Our results provide behavioral and mechanistic evidence that symbol grounding can emerge in language models.
arXiv Detail & Related papers (2025-10-15T17:56:15Z)
- Knowledge Graph-Infused Fine-Tuning for Structured Reasoning in Large Language Models
The paper proposes a fine-tuning framework based on knowledge graph injection. It builds on pretrained language models and introduces structured graph information for auxiliary learning. The approach demonstrates better semantic consistency and contextual logic modeling in scenarios involving structural reasoning and entity extraction.
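A common way to inject graph structure as auxiliary learning, sketched here under assumptions (the paper's exact objective is not given in this summary), is to add a TransE-style triple term to the language-modeling loss:

```python
import torch
import torch.nn.functional as F

def joint_loss(lm_logits: torch.Tensor,
               lm_targets: torch.Tensor,
               head_emb: torch.Tensor,
               rel_emb: torch.Tensor,
               tail_emb: torch.Tensor,
               alpha: float = 0.1) -> torch.Tensor:
    """Language-modeling loss plus a TransE-style auxiliary term that
    pulls head + relation toward tail for injected knowledge-graph
    triples. An illustrative combination, not the paper's objective.
    """
    lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                         lm_targets.view(-1))
    kg = (head_emb + rel_emb - tail_emb).norm(dim=-1).mean()
    return lm + alpha * kg

# Hypothetical shapes: batch of 2 sequences of length 5, vocab 100, dim 16.
logits, targets = torch.randn(2, 5, 100), torch.randint(0, 100, (2, 5))
h, r, t = (torch.randn(8, 16) for _ in range(3))
print(joint_loss(logits, targets, h, r, t).item())
```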
arXiv Detail & Related papers (2025-08-20T04:52:12Z)
- Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?
Coaxing desired behaviors out of pretrained models, while avoiding undesirable ones, has redefined NLP.
We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance.
arXiv Detail & Related papers (2023-07-31T22:58:41Z)
- Agentività e telicità in GilBERTo: implicazioni cognitive (Agentivity and telicity in GilBERTo: cognitive implications)
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics.
The semantic properties considered are telicity (also combined with definiteness) and agentivity.
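Probing studies of this kind are typically run by training a linear classifier on frozen embeddings; the sketch below shows that standard recipe with hypothetical data, not the study's exact protocol:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_accuracy(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Mean 5-fold accuracy of a linear probe predicting a lexical-semantic
    property (e.g. telic vs. atelic) from frozen verb embeddings.
    Above-chance accuracy is the usual evidence that the property is
    linearly encoded.
    """
    clf = LogisticRegression(max_iter=1000)
    return float(cross_val_score(clf, embeddings, labels, cv=5).mean())

# Hypothetical usage with random stand-ins for GilBERTo verb embeddings.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 32)), rng.integers(0, 2, size=100)
print(probe_accuracy(X, y))  # ~0.5 on random data, i.e. chance level
```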
arXiv Detail & Related papers (2023-07-06T10:52:22Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
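As a toy illustration of mapping an utterance into a probabilistic language of thought, the sketch below hand-writes the translation for one sentence and answers a query by rejection sampling; the world model, utterance, and numbers are invented for illustration and are not from the paper:

```python
import random

def world_model() -> int:
    """Prior over a toy world: how many of 3 cups hold a ball."""
    return sum(random.random() < 0.5 for _ in range(3))

def utterance_is_true(n_balls: int) -> bool:
    """Meaning of 'most of the cups have a ball' as a truth condition."""
    return n_balls >= 2

def posterior(samples: int = 10_000) -> float:
    """P(all 3 cups have a ball | 'most of the cups have a ball'),
    estimated by rejection sampling over the prior."""
    kept = [n for n in (world_model() for _ in range(samples))
            if utterance_is_true(n)]
    return sum(n == 3 for n in kept) / len(kept)

print(round(posterior(), 2))  # ≈ 0.25: 1/4 of 'most' worlds are 'all' worlds
```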
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- The Vector Grounding Problem
Drawing on philosophical theories of representational content, we argue that LLMs can achieve referential grounding. One potentially surprising implication of our discussion is that multimodality and embodiment are neither necessary nor sufficient to overcome the Grounding Problem.
arXiv Detail & Related papers (2023-04-04T02:54:04Z)
- How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding
We provide a mechanistic understanding of how transformers learn "semantic structure".
We show, through a combination of mathematical analysis and experiments on Wikipedia data, that the embedding layer and the self-attention layer encode the topical structure.
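One generic way to quantify such topical structure, assumed here for illustration rather than taken from the paper's analysis, is to compare within-topic and cross-topic cosine similarity of embeddings:

```python
import numpy as np
from itertools import combinations

def cos(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def topic_separation(emb: dict, topics: dict) -> float:
    """Mean within-topic minus mean cross-topic cosine similarity of word
    embeddings; positive values indicate that same-topic words cluster,
    the kind of structure the paper locates in the embedding and
    self-attention layers.
    """
    within, cross = [], []
    words = [(t, w) for t, ws in topics.items() for w in ws]
    for (t1, w1), (t2, w2) in combinations(words, 2):
        (within if t1 == t2 else cross).append(cos(emb[w1], emb[w2]))
    return float(np.mean(within) - np.mean(cross))

# Hypothetical usage with random embeddings (expect a value near 0 here).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=16) for w in ["cat", "dog", "ion", "atom"]}
print(topic_separation(emb, {"animals": ["cat", "dog"],
                             "physics": ["ion", "atom"]}))
```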
arXiv Detail & Related papers (2023-03-07T21:42:17Z)
- Compositional Generalization in Grounded Language Learning via Induced Model Sparsity
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
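The idea of inducing sparse word-attribute correlations can be sketched with an L1 penalty on a word-attribute score matrix; this is a generic illustration of the technique, not the paper's architecture:

```python
import torch

def grounding_scores(word_emb: torch.Tensor,
                     attr_emb: torch.Tensor,
                     l1_weight: float = 0.01):
    """Score each instruction word against each object attribute, with an
    L1 penalty that pushes the word-attribute matrix toward the sparse,
    near one-to-one mapping that supports compositional recombination.
    """
    scores = word_emb @ attr_emb.T             # (n_words, n_attrs)
    penalty = l1_weight * scores.abs().sum()   # sparsity regularizer
    return scores, penalty

# Hypothetical usage: 4 instruction words vs. 6 object attributes.
w = torch.randn(4, 16, requires_grad=True)
a = torch.randn(6, 16)
scores, penalty = grounding_scores(w, a)
loss = penalty  # in training, added to the navigation task loss
loss.backward()
```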
arXiv Detail & Related papers (2022-07-06T08:46:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.