Grounding 'Grounding' in NLP
- URL: http://arxiv.org/abs/2106.02192v1
- Date: Fri, 4 Jun 2021 00:40:59 GMT
- Title: Grounding 'Grounding' in NLP
- Authors: Khyathi Raghavi Chandu, Yonatan Bisk, Alan W Black
- Abstract summary: As a community, we use the term broadly to reference any linking of text to data or non-textual modality.
Cognitive Science more formally defines "grounding" as the process of establishing what mutual information is required for successful communication.
- Score: 59.28887479119075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The NLP community has seen substantial recent interest in grounding to
facilitate interaction between language technologies and the world. However, as
a community, we use the term broadly to reference any linking of text to data
or non-textual modality. In contrast, Cognitive Science more formally defines
"grounding" as the process of establishing what mutual information is required
for successful communication between two interlocutors -- a definition which
might implicitly capture the NLP usage but differs in intent and scope. We
investigate the gap between these definitions and seek answers to the following
questions: (1) What aspects of grounding are missing from NLP tasks? Here we
present the dimensions of coordination, purviews and constraints. (2) How is
the term "grounding" used in the current research? We study the trends in
datasets, domains, and tasks introduced in recent NLP conferences. And finally,
(3) How can we advance our current definition to bridge the gap with Cognitive
Science? We present ways to both create new tasks and repurpose existing ones to
move toward a more complete sense of grounding.
Related papers
- Grounding from an AI and Cognitive Science Lens [4.624355582375099]
This article explores grounding from both cognitive science and machine learning perspectives.
It identifies the subtleties of grounding, its significance for collaborative agents, and similarities and differences in grounding approaches in both communities.
arXiv Detail & Related papers (2024-02-19T17:44:34Z)
- Survey of Natural Language Processing for Education: Taxonomy, Systematic Review, and Future Trends [26.90343340881045]
We review recent advances in NLP with the focus on solving problems relevant to the education domain.
We present a taxonomy of NLP in the education domain and highlight typical NLP applications including question answering, question construction, automated assessment, and error correction.
We conclude with six promising directions for future research, including more datasets in the education domain, controllable usage of LLMs, intervention of difficulty-level control, interpretable educational NLP, methods with adaptive learning, and integrated systems for education.
arXiv Detail & Related papers (2024-01-15T07:48:42Z)
- Grounding Gaps in Language Model Generations [67.79817087930678]
We study whether large language models generate text that reflects human grounding.
We find that -- compared to humans -- LLMs generate language with less conversational grounding.
To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization.
arXiv Detail & Related papers (2023-11-15T17:40:27Z)
- Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good [115.1507728564964]
We introduce NLP4SG Papers, a scientific dataset with three associated tasks.
These tasks help identify NLP4SG papers and characterize the NLP4SG landscape.
We apply state-of-the-art NLP models to each of these tasks and run them over the entire ACL Anthology.
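A minimal sketch of how such paper identification could be framed as text classification; the TF-IDF pipeline and toy examples below are illustrative assumptions, not the paper's actual models or task definitions:

```python
# Hypothetical sketch: identifying NLP4SG papers as binary text
# classification over titles and abstracts. The paper uses
# state-of-the-art NLP models; this baseline is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (title + abstract concatenated).
papers = [
    "Detecting hate speech to protect vulnerable online communities",
    "A faster attention kernel for transformer inference",
]
labels = [1, 0]  # 1 = addresses a social-good problem, 0 = does not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(papers, labels)

# Score an unseen abstract from the corpus being characterized.
print(clf.predict(["Low-resource machine translation for crisis response"]))
```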
arXiv Detail & Related papers (2023-05-09T14:16:25Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in the natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field of machine learning that studies approaches to learning better learning algorithms.
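To make the idea concrete, here is a minimal first-order meta-learning loop (Reptile-style) on toy linear-regression tasks; this is an illustrative sketch of "learning to learn", not a method from the survey:

```python
# Minimal first-order meta-learning sketch (Reptile-style): the
# meta-parameters are nudged toward weights that adapt quickly to each
# sampled task. Illustrative only; the survey covers many families
# (metric-, model-, and optimization-based meta-learning).
import numpy as np

rng = np.random.default_rng(0)
meta_w = np.zeros(2)  # meta-initialization for a linear model y = w0 + w1*x

def sample_task():
    """Each task is a random linear function to be fit from few examples."""
    true_w = rng.normal(size=2)
    x = rng.uniform(-1, 1, size=10)
    y = true_w[0] + true_w[1] * x
    return x, y

for step in range(1000):
    x, y = sample_task()
    w = meta_w.copy()
    for _ in range(5):  # inner loop: adapt to the task by gradient descent
        pred = w[0] + w[1] * x
        grad = np.array([np.mean(pred - y), np.mean((pred - y) * x)])
        w -= 0.1 * grad
    meta_w += 0.05 * (w - meta_w)  # outer loop: move meta-init toward adapted weights
```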
arXiv Detail & Related papers (2022-05-03T13:58:38Z)
- Exploiting Scene Graphs for Human-Object Interaction Detection [81.49184987430333]
Human-Object Interaction (HOI) detection is a fundamental visual task aiming at localizing and recognizing interactions between humans and objects.
We propose a novel method, SG2HOI, that exploits scene-graph information for the Human-Object Interaction detection task.
Our method, SG2HOI, incorporates the SG information in two ways: (1) we embed a scene graph into a global context clue, serving as the scene-specific environmental context; and (2) we build a relation-aware message-passing module to gather relationships from objects' neighborhood and transfer them into interactions.
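A hedged sketch of the second ingredient, relation-aware message passing over a scene graph; the PyTorch module below is an illustrative assumption about shapes and aggregation, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): one round of relation-aware
# message passing, where each object node aggregates messages built from
# its neighbors' features and the connecting relation embeddings.
import torch
import torch.nn as nn

class RelationMessagePassing(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(3 * dim, dim)  # [sender, relation, receiver] -> message
        self.update = nn.GRUCell(dim, dim)  # fold aggregated messages into node state

    def forward(self, nodes, edges, rel_emb):
        # nodes: (N, dim) object features; edges: (E, 2) index pairs (src, dst);
        # rel_emb: (E, dim) embedding of the relation on each edge.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([nodes[src], rel_emb, nodes[dst]], dim=-1))
        agg = torch.zeros_like(nodes).index_add_(0, dst, m)  # sum messages per receiver
        return self.update(agg, nodes)

# Toy scene graph: 3 objects, 2 relations ("person holds cup", "cup on table").
layer = RelationMessagePassing(dim=16)
nodes = torch.randn(3, 16)
edges = torch.tensor([[0, 1], [1, 2]])
rel_emb = torch.randn(2, 16)
print(layer(nodes, edges, rel_emb).shape)  # torch.Size([3, 16])
```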
arXiv Detail & Related papers (2021-08-19T09:40:50Z)
- Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training [8.271859911016719]
Language grounding aims to link the symbolic representation of language (e.g., words) to the rich perceptual knowledge of the outside world.
We argue that this approach sacrifices the abstract knowledge obtained from linguistic co-occurrence statistics in the process of acquiring perceptual information.
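A minimal sketch of the multi-task idea, assuming a shared embedding table trained jointly on a textual co-occurrence loss and a visual-grounding loss; all names, shapes, and the specific losses are illustrative, not the paper's exact model:

```python
# Hedged sketch of the general idea (not the paper's exact model): a shared
# word-embedding table trained with two losses at once, so perceptual
# grounding is added without discarding distributional knowledge.
import torch
import torch.nn as nn

vocab, dim, img_dim = 1000, 64, 512
emb = nn.Embedding(vocab, dim)       # shared word representation
to_vision = nn.Linear(dim, img_dim)  # maps words into image-feature space
lm_head = nn.Linear(dim, vocab)      # textual co-occurrence objective

opt = torch.optim.Adam([*emb.parameters(), *to_vision.parameters(), *lm_head.parameters()])

words = torch.randint(0, vocab, (32,))    # toy batch of word ids
context = torch.randint(0, vocab, (32,))  # co-occurring words (skip-gram style)
img_feats = torch.randn(32, img_dim)      # visual features paired with the words

h = emb(words)
text_loss = nn.functional.cross_entropy(lm_head(h), context)  # task 1: language
ground_loss = 1 - nn.functional.cosine_similarity(to_vision(h), img_feats).mean()  # task 2: vision
(text_loss + ground_loss).backward()  # joint multi-task update
opt.step()
```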
arXiv Detail & Related papers (2021-04-15T14:49:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.