Grounding for Artificial Intelligence
- URL: http://arxiv.org/abs/2312.09532v1
- Date: Fri, 15 Dec 2023 04:45:48 GMT
- Title: Grounding for Artificial Intelligence
- Authors: Bing Liu
- Abstract summary: Grounding is the process of connecting natural language and abstract knowledge to the internal representation of the real world in an intelligent being.
This paper attempts to study this problem systematically.
- Score: 8.13763396934359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A core function of intelligence is grounding, which is the process of
connecting natural language and abstract knowledge to the internal
representation of the real world in an intelligent being, e.g., a human. Human
cognition is grounded in our sensorimotor experiences in the external world and
subjective feelings in our internal world. We use language to communicate with
each other, and language is grounded in our shared sensorimotor experiences and
feelings. Without this shared grounding, we could not understand each other,
because all natural languages are highly abstract and can describe only a tiny
portion of what has happened or is happening in the real world. Although
grounding at high or abstract levels has been studied in different fields and
applications, to our knowledge, little systematic work has been done at
fine-grained levels. With the rapid progress of large language models (LLMs), a
sound understanding of grounding is imperative if we are to move to the next
level of intelligence. It is also believed that grounding is necessary for
Artificial General Intelligence (AGI). This paper attempts to study this
problem systematically.
Related papers
- Grounding from an AI and Cognitive Science Lens [4.624355582375099]
This article explores grounding from both cognitive science and machine learning perspectives.
It identifies the subtleties of grounding, its significance for collaborative agents, and similarities and differences in grounding approaches in both communities.
arXiv Detail & Related papers (2024-02-19T17:44:34Z)
- Grounding Gaps in Language Model Generations [67.79817087930678]
We study whether large language models generate text that reflects human grounding.
We find that -- compared to humans -- LLMs generate language with less conversational grounding.
To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization.
arXiv Detail & Related papers (2023-11-15T17:40:27Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- The Vector Grounding Problem [0.0]
We argue that, among the different kinds of grounding the paper distinguishes, referential grounding is the one that lies at the heart of the Vector Grounding Problem.
We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.
arXiv Detail & Related papers (2023-04-04T02:54:04Z)
- Imagination-Augmented Natural Language Understanding [71.51687221130925]
We introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks.
iACE enables visual imagination with external knowledge transferred from powerful generative and pre-trained vision-and-language models.
Experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models.
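To make the architecture concrete, here is a minimal sketch of an imagination-augmented encoder in the spirit of iACE; every class and parameter name is an illustrative assumption, not the paper's actual API:

```python
# Hedged sketch: fuse a purely textual representation with a representation
# of an "imagined" visual scene, then classify. All components are assumed callables.
import torch
import torch.nn as nn

class ImaginationAugmentedClassifier(nn.Module):
    def __init__(self, text_encoder, imaginator, cross_modal_encoder, hidden_dim, num_labels):
        super().__init__()
        self.text_encoder = text_encoder                 # e.g. a pre-trained language model encoder
        self.imaginator = imaginator                     # e.g. a generative text-to-image model
        self.cross_modal_encoder = cross_modal_encoder   # e.g. a CLIP-style vision-language encoder
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, text):
        h_text = self.text_encoder(text)                      # purely textual representation
        imagined_image = self.imaginator(text)                # "imagine" a visual scene for the input
        h_grounded = self.cross_modal_encoder(text, imagined_image)  # visually grounded representation
        fused = torch.cat([h_text, h_grounded], dim=-1)       # fuse textual and imagined views
        return self.classifier(fused)                         # predict the NLU label
```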
arXiv Detail & Related papers (2022-04-18T19:39:36Z)
- Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
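A minimal sketch of this combination, assuming two hypothetical scoring functions: llm_usefulness for how strongly the language model rates a skill as the next step, and affordance for the probability the skill succeeds in the current state. Neither name comes from the paper's released code.

```python
# Hedged sketch: greedily pick the skill that maximizes
# (LLM usefulness) x (affordance of executing it here).
def plan(instruction, skills, state, llm_usefulness, affordance, execute_skill, max_steps=10):
    history = []
    for _ in range(max_steps):
        scored = {
            s: llm_usefulness(instruction, history, s) * affordance(s, state)
            for s in skills
        }
        best = max(scored, key=scored.get)
        history.append(best)
        if best == "done":                   # hypothetical termination skill
            break
        state = execute_skill(best, state)   # run the low-level skill, observe the new state
    return history
```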
arXiv Detail & Related papers (2022-04-04T17:57:11Z)
- Building Human-like Communicative Intelligence: A Grounded Perspective [1.0152838128195465]
After making astounding progress in language learning, AI systems seem to be approaching a ceiling and still fail to reflect important aspects of human communicative capacities.
This paper suggests that the dominant cognitively-inspired AI directions, based on nativist and symbolic paradigms, lack necessary substantiation and concreteness to guide progress in modern AI.
I propose a list of concrete, implementable components for building "grounded" linguistic intelligence.
arXiv Detail & Related papers (2022-01-02T01:43:24Z)
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [86.21137454228848]
We factorize PIGLeT into a physical dynamics model and a separate language model.
PIGLeT can read a sentence, simulate neurally what might happen next, and then communicate that result through a literal symbolic representation.
It is able to correctly forecast "what happens next" given an English sentence over 80% of the time, outperforming a 100x larger, text-to-text approach by over 10%.
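As a rough illustration of this factorization (all names here are hypothetical, not the released PIGLeT interface), the read-simulate-report loop might look like:

```python
# Hedged sketch of a neuro-symbolic read/simulate/report loop: a physics model
# steps symbolic object states, while a separate language model maps between
# text and those symbolic states.
class NeuroSymbolicSimulator:
    def __init__(self, physics_model, language_model):
        self.physics = physics_model   # (object_states, action) -> next object_states
        self.lm = language_model       # text <-> symbolic actions/states

    def what_happens_next(self, sentence, object_states):
        action = self.lm.parse_action(sentence)                 # ground the sentence in a symbolic action
        next_states = self.physics.step(object_states, action)  # simulate what might happen next
        return self.lm.describe(next_states)                    # report the result back in language
```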
arXiv Detail & Related papers (2021-06-01T02:32:12Z)
- Crossmodal Language Grounding in an Embodied Neurocognitive Model [28.461246169379685]
Human infants acquire natural language with seeming ease at an early age.
From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities.
We present a neurocognitive model for language grounding which reflects bio-inspired mechanisms.
arXiv Detail & Related papers (2020-06-24T08:12:09Z)
- Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models at tackling tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.