Ecological Semantics: Programming Environments for Situated Language Understanding
- URL: http://arxiv.org/abs/2003.04567v2
- Date: Sun, 24 May 2020 07:48:05 GMT
- Title: Ecological Semantics: Programming Environments for Situated Language Understanding
- Authors: Ronen Tamari, Gabriel Stanovsky, Dafna Shahaf and Reut Tsarfaty
- Abstract summary: Grounded language learning approaches offer the promise of deeper understanding by situating learning in richer, more structured training environments.
We propose treating environments as "first-class citizens" in semantic representations.
We argue that models must begin to understand and program in the language of affordances.
- Score: 25.853707930426175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale natural language understanding (NLU) systems have made impressive
progress: they can be applied flexibly across a variety of tasks, and employ
minimal structural assumptions. However, extensive empirical research has shown
this to be a double-edged sword, coming at the cost of shallow understanding:
inferior generalization, grounding and explainability. Grounded language
learning approaches offer the promise of deeper understanding by situating
learning in richer, more structured training environments, but are limited in
scale to relatively narrow, predefined domains. How might we enjoy the best of
both worlds: grounded, general NLU? Following extensive contemporary cognitive
science, we propose treating environments as "first-class citizens" in semantic
representations, worthy of research and development in their own right.
Importantly, models should also be partners in the creation and configuration
of environments, rather than just actors within them, as in existing
approaches. To do so, we argue that models must begin to understand and program
in the language of affordances (which define possible actions in a given
situation) both for online, situated discourse comprehension, as well as
large-scale, offline common-sense knowledge mining. To this end we propose an
environment-oriented ecological semantics, outlining theoretical and practical
approaches towards implementation. We further provide actual demonstrations
building upon interactive fiction programming languages.
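As a rough illustration of what "programming in the language of affordances" might look like, here is a minimal, hypothetical Python sketch (not the paper's code; its demonstrations build on interactive fiction languages). The names `Thing` and `afford` are illustrative inventions: an environment is a first-class object whose state determines which actions it currently affords, and comprehension executes against it.

```python
# A minimal, hypothetical sketch (not the authors' code): an environment as a
# first-class object whose state determines its affordances, i.e. the actions
# currently possible. Names like `Thing` and `afford` are illustrative.
from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    traits: set = field(default_factory=set)      # e.g. {"container", "open"}
    contents: list = field(default_factory=list)

def afford(thing: Thing) -> list:
    """Map a thing's current traits to the actions it affords."""
    actions = []
    if "container" in thing.traits:
        actions.append("close" if "open" in thing.traits else "open")
        if "open" in thing.traits:
            actions.append("put-in")
    if "portable" in thing.traits:
        actions.append("take")
    return actions

# Situated comprehension of "put the apple in the open box" then amounts to
# checking and executing an affordance, i.e. programming the environment:
box = Thing("box", {"container", "open"})
apple = Thing("apple", {"portable"})
assert "put-in" in afford(box)    # the utterance is licensed by an affordance
box.contents.append(apple)        # executing it updates the environment state
print(afford(box), [t.name for t in box.contents])
```

The point of the sketch is that the utterance is interpreted by checking and running affordances in an explicit environment, rather than being mapped to a static meaning representation.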
Related papers
- Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach that enhances and exploits the reactive capabilities of LLMs to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z) - "What's my model inside of?": Exploring the role of environments for
grounded natural language understanding [1.8829370712240063]
In this thesis we adopt an ecological approach to grounded natural language understanding (NLU) research.
We develop novel training and annotation approaches for procedural text understanding based on text-based game environments.
We propose a design for AI-augmented "social thinking environments" for knowledge workers like scientists.
arXiv Detail & Related papers (2024-02-04T15:52:46Z) - Navigation with Large Language Models: Semantic Guesswork as a Heuristic
for Planning [73.0990339667978]
Navigation in unfamiliar environments presents a major challenge for robots.
We use language models to bias exploration of novel real-world environments.
We evaluate the resulting method, LFG (Language Frontier Guide), in challenging real-world environments and simulated benchmarks.
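A loose sketch of the idea follows (not LFG's actual implementation; `lm_score` is a hypothetical stand-in for a real LLM query): the language model's semantic "guesswork" biases which frontier of a partially mapped environment the planner explores next.

```python
# A loose sketch of the idea, not LFG's actual implementation: use a language
# model's semantic "guesswork" to score which frontier of a partially mapped
# environment is most promising. `lm_score` is a hypothetical stand-in for a
# real LLM query.

def lm_score(goal: str, frontier_desc: str) -> float:
    # Stand-in: a real system would prompt an LLM with something like
    # "How likely is a {goal} to be found near {frontier_desc}?"
    prior = {"kitchen": 0.9, "hallway": 0.3, "garage": 0.2}
    return prior.get(frontier_desc, 0.1)

def pick_frontier(goal, frontiers, dist, alpha=1.0):
    # Use the LM only as a heuristic bias; path cost still matters, so a
    # wrong guess degrades gracefully into ordinary frontier exploration.
    return max(frontiers, key=lambda f: alpha * lm_score(goal, f) - dist[f])

frontiers = ["kitchen", "hallway", "garage"]
dist = {"kitchen": 0.5, "hallway": 0.2, "garage": 0.4}
print(pick_frontier("coffee mug", frontiers, dist))   # -> kitchen
```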
arXiv Detail & Related papers (2023-10-16T06:21:06Z) - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
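A toy sketch of this combination (stub functions and made-up numbers, not the paper's models): the language model rates how useful a skill is for the instruction, an affordance function rates how feasible it is in the current state, and the robot runs the skill with the best combined score.

```python
# Toy sketch, not the paper's models: combine a language-model usefulness
# score with an affordance (feasibility) score and execute the best skill.

def lm_usefulness(instruction: str, skill: str) -> float:
    # Stand-in for a language-model likelihood of the skill given the task.
    table = {"pick up the sponge": 0.7, "go to the counter": 0.25,
             "open the fridge": 0.05}
    return table.get(skill, 0.01)

def affordance(skill: str, state: dict) -> float:
    # Stand-in for a learned value function: can this skill succeed now?
    return 0.9 if state.get("sponge_visible") and "sponge" in skill else 0.3

state = {"sponge_visible": True}
skills = ["pick up the sponge", "go to the counter", "open the fridge"]
best = max(skills, key=lambda s: lm_usefulness("wipe the table", s)
           * affordance(s, state))
print(best)   # -> "pick up the sponge"
```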
arXiv Detail & Related papers (2022-04-04T17:57:11Z) - SILG: The Multi-environment Symbolic Interactive Language Grounding
Benchmark [62.34200575624785]
We propose the multi-environment Symbolic Interactive Language Grounding benchmark (SILG).
SILG consists of grid-world environments that require generalization to new dynamics, entities, and partially observed worlds (RTFM, Messenger, NetHack).
We evaluate recent advances such as egocentric local convolution, recurrent state-tracking, entity-centric attention, and pretrained LM using SILG.
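One way to picture such a multi-environment benchmark (an illustrative Python sketch, not SILG's actual API) is a shared interface that reduces every symbolic world to a grid observation plus a text goal, so one agent architecture can be evaluated across all environments.

```python
# Illustrative sketch, not SILG's actual API: a shared interface that reduces
# every symbolic world to a grid observation plus a text goal.
from abc import ABC, abstractmethod

class SymbolicGroundingEnv(ABC):
    @abstractmethod
    def reset(self) -> dict:
        """Return {'grid': symbolic cell ids, 'text': goal or manual}."""

    @abstractmethod
    def step(self, action: int) -> tuple:
        """Return (observation, reward, done)."""

class ToyGridEnv(SymbolicGroundingEnv):
    # A stateless one-step world, just to show the interface in use.
    def reset(self):
        return {"grid": [["wall", "agent"], ["goal", "empty"]],
                "text": "reach the goal tile"}

    def step(self, action):
        return self.reset(), 1.0, True

env = ToyGridEnv()
obs = env.reset()
obs, reward, done = env.step(0)
print(obs["text"], reward, done)
```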
arXiv Detail & Related papers (2021-10-20T17:02:06Z) - Neural Abstructions: Abstractions that Support Construction for Grounded
Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to make semantic modifications for an open-ended house-building task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z) - How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model that better understands programs, whether by directly applying off-the-shelf NLP pre-training techniques to source code or by heuristically adding features to the model.
We propose a novel program-semantics learning paradigm: the model should learn from information composed of (1) representations that align well with the fundamental operations of operational semantics, and (2) information about environment transitions.
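A minimal sketch of the flavor of that paradigm (hypothetical, not the paper's model): pair each instruction with the environment transition it induces, so the learning signal reflects operational semantics rather than surface token statistics alone.

```python
# Hypothetical sketch, not the paper's model: interpret a tiny assignment
# language and log (instruction, state-before, state-after) transitions.

def run_with_trace(program):
    """Interpret a tiny assignment language, logging (instr, before, after)."""
    env, trace = {}, []
    for instr in program:                 # instructions like ("set", "x", 1)
        before = dict(env)
        op, var, arg = instr
        if op == "set":
            env[var] = arg
        elif op == "add":
            env[var] = env.get(var, 0) + arg
        trace.append((instr, before, dict(env)))
    return trace

# A model trained on such traces can tie "add" to the state change it causes
# rather than to co-occurrence statistics of the source text.
for step in run_with_trace([("set", "x", 1), ("add", "x", 2)]):
    print(step)
```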
arXiv Detail & Related papers (2021-05-10T12:21:42Z) - Language (Re)modelling: Towards Embodied Language Understanding [33.50428967270188]
This work proposes an approach to representation and learning based on the tenets of embodied cognitive linguistics (ECL).
According to ECL, natural language is inherently executable (like programming languages).
This position paper argues that the use of grounding by metaphoric inference and simulation will greatly benefit NLU systems.
arXiv Detail & Related papers (2020-05-01T10:57:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all generated summaries) and is not responsible for any consequences of its use.