Neural Abstructions: Abstractions that Support Construction for Grounded
Language Learning
- URL: http://arxiv.org/abs/2107.09285v1
- Date: Tue, 20 Jul 2021 07:01:15 GMT
- Title: Neural Abstructions: Abstractions that Support Construction for Grounded
Language Learning
- Authors: Kaylee Burns, Christopher D. Manning, Li Fei-Fei
- Abstract summary: Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
- We show that with this method a user population is able to build a semantic parser for an open-ended house modification task in Minecraft.
- Score: 69.1137074774244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although virtual agents are increasingly situated in environments where
natural language is the most effective mode of interaction with humans, these
exchanges are rarely used as an opportunity for learning. Leveraging language
interactions effectively requires addressing limitations in the two most common
approaches to language grounding: semantic parsers built on top of fixed
object categories are precise but inflexible, while end-to-end models are
maximally expressive but fickle and opaque. Our goal is to develop a system
that
balances the strengths of each approach so that users can teach agents new
instructions that generalize broadly from a single example. We introduce the
idea of neural abstructions: a set of constraints on the inference procedure of
a label-conditioned generative model that can affect the meaning of the label
in context. Starting from a core programming language that operates over
abstructions, users can define increasingly complex mappings from natural
language to actions. We show that with this method a user population is able to
build a semantic parser for an open-ended house modification task in Minecraft.
The semantic parser that results is both flexible and expressive: the
percentage of utterances sourced from redefinitions increases steadily over the
course of 191 total exchanges, achieving a final value of 28%.
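The redefinition loop described in the abstract can be pictured concretely. Below is a minimal, hypothetical Python sketch, not the authors' implementation: the label-conditioned generative model is reduced to a lookup over core actions, and the names `CORE`, `teach`, and `interpret` are illustrative stand-ins for the user-built mapping from utterances to actions.

```python
# Hypothetical sketch of the redefinition loop described above.
# The label-conditioned generative model is stubbed out as a lookup
# over core actions; all names are illustrative, not the authors' API.

CORE = {
    "place_block": lambda world, pos: world.append(pos),
    "remove_block": lambda world, pos: world.remove(pos),
}

definitions = {}  # utterance -> list of lower-level utterances

def interpret(utterance, world):
    """Resolve an utterance to core actions, recursing through redefinitions."""
    if utterance in definitions:          # user-taught mapping
        for step in definitions[utterance]:
            interpret(step, world)
    else:                                 # assumed core-language form
        action, *args = utterance.split()
        CORE[action](world, tuple(int(a) for a in args))

def teach(utterance, steps):
    """One-shot redefinition: a new instruction in terms of known ones."""
    definitions[utterance] = steps

world = []
teach("build_pillar 0 0", ["place_block 0 0 0", "place_block 0 0 1"])
interpret("build_pillar 0 0", world)
print(world)  # [(0, 0, 0), (0, 0, 1)]
```

The property mirrored here is that a single `teach` call makes a new utterance reusable everywhere afterwards, which is what lets redefinitions grow to 28% of exchanges.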
Related papers
- Interpretable Robotic Manipulation from Language [11.207620790833271]
We introduce an explainable behavior cloning agent, named Ex-PERACT, specifically designed for manipulation tasks.
At the top level, the model is tasked with learning a discrete skill code, while at the bottom level, the policy network translates the problem into a voxelized grid and maps the discretized actions to voxel grids.
We evaluate our method across eight challenging manipulation tasks utilizing the RLBench benchmark, demonstrating that Ex-PERACT not only achieves competitive policy performance but also effectively bridges the gap between human instructions and machine execution in complex environments.
arXiv Detail & Related papers (2024-05-27T11:02:21Z)
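The two-level structure the Ex-PERACT summary describes can be sketched as a toy, under assumptions: `top_level` and `bottom_level` below are random stand-ins for the learned skill classifier and voxel policy, not the paper's model.

```python
# Illustrative sketch (not the Ex-PERACT code) of the two-level structure:
# a top-level module picks a discrete skill code, and a bottom-level policy
# maps a voxelized observation plus that code to a discretized action.
import numpy as np

rng = np.random.default_rng(0)
NUM_SKILLS, GRID = 8, (16, 16, 16)

def top_level(instruction_embedding):
    # Stand-in for a learned classifier over discrete skill codes.
    logits = instruction_embedding @ rng.normal(
        size=(instruction_embedding.shape[-1], NUM_SKILLS))
    return int(np.argmax(logits))

def bottom_level(voxels, skill_code):
    # Stand-in for a policy that scores every voxel conditioned on the skill.
    scores = voxels + 0.01 * skill_code          # toy conditioning
    return np.unravel_index(np.argmax(scores), GRID)  # target voxel as action

voxels = rng.random(GRID)
skill = top_level(rng.normal(size=32))
print("skill:", skill, "target voxel:", bottom_level(voxels, skill))
```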
- On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex [48.588772371355816]
This paper presents the first empirical study on the adversarial robustness of a large prompt-based language model of code, Codex.
Our results demonstrate that the state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples.
arXiv Detail & Related papers (2023-01-30T13:21:00Z)
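The study's setup implies a simple evaluation loop. The sketch below is purely illustrative: `query_model` is a stub, and these toy perturbations stand in for the carefully crafted adversarial examples the paper constructs.

```python
# Hypothetical robustness check: apply small, semantics-preserving
# perturbations to a prompt and test whether the model's output changes.
def perturb(prompt):
    # Simple illustrative perturbations; real attacks are carefully crafted.
    yield prompt.replace("return", "return  ")   # whitespace noise
    yield prompt.replace("items", "itms")        # identifier rename
    yield "# note\n" + prompt                    # innocuous comment

def query_model(prompt):
    # Stand-in for a call to a code LM; deterministic within one process.
    return f"def f(): pass  # variant {hash(prompt) % 97}"

base_prompt = "def sort(items): return sorted(items)"
base_output = query_model(base_prompt)
for p in perturb(base_prompt):
    if query_model(p) != base_output:
        print("output changed under perturbation:", repr(p))
```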
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Distilling Linguistic Context for Language Model Compression [27.538080564616703]
A computationally expensive and memory-intensive neural network lies behind the recent success of language representation learning.
We present a new knowledge distillation objective for language representation learning that transfers the contextual knowledge via two types of relationships.
We validate the effectiveness of our method on challenging benchmarks of language understanding tasks.
arXiv Detail & Related papers (2021-09-17T05:51:45Z)
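One way to read "transfers the contextual knowledge via two types of relationships" is relation matching. The numpy sketch below shows the general pattern under that assumption (the paper's actual objectives may differ): the student matches the teacher's pairwise token relations, which works even when the two models have different widths.

```python
# Toy relation-based distillation: rather than matching raw hidden states,
# the student matches the teacher's pairwise relations between token vectors.
import numpy as np

def pairwise_relations(h):
    # Cosine similarity between every pair of token vectors.
    n = h / np.linalg.norm(h, axis=-1, keepdims=True)
    return n @ n.T

def relation_distill_loss(teacher_h, student_h):
    t, s = pairwise_relations(teacher_h), pairwise_relations(student_h)
    return float(np.mean((t - s) ** 2))  # MSE over relation matrices

teacher = np.random.randn(10, 768)   # 10 tokens, teacher width 768
student = np.random.randn(10, 256)   # smaller student width
print(relation_distill_loss(teacher, student))
```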
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
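The pipeline in the entry above can be sketched in a few lines: an LM (stubbed here) paraphrases a free-form request into a canonical sublanguage, and a small grammar then maps the canonical form to a meaning representation deterministically. Everything below, including the flight-domain example, is a hypothetical illustration rather than the paper's system.

```python
# Illustrative sketch: constrained paraphrase into a controlled sublanguage,
# then a deterministic mapping to a logical form.
import re

def lm_paraphrase(utterance):
    # Stand-in for constrained LM decoding into the sublanguage.
    return "show me flights from boston to denver"

CANONICAL = re.compile(r"show me flights from (\w+) to (\w+)")

def to_logical_form(canonical):
    m = CANONICAL.fullmatch(canonical)
    assert m, "decoding must be constrained to the sublanguage"
    return f"(flight (from {m.group(1)}) (to {m.group(2)}))"

print(to_logical_form(lm_paraphrase("i need to get from boston to denver")))
```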
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
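A toy sketch of the mechanism in the entry above, under assumptions: token features are mixed along the arcs of a predicted semantic dependency graph with one graph-convolution step before the task head. The shapes and the edge list are made up for illustration.

```python
# Toy graph convolution over a semantic dependency graph during finetuning.
import numpy as np

def graph_conv(features, edges, num_tokens):
    # Symmetric adjacency with self-loops, then one round of mean mixing.
    adj = np.eye(num_tokens)
    for head, dep in edges:
        adj[head, dep] = adj[dep, head] = 1.0
    adj /= adj.sum(axis=1, keepdims=True)
    return adj @ features

tokens = 5
features = np.random.randn(tokens, 64)   # e.g. encoder outputs
semantic_edges = [(0, 2), (2, 4)]        # predicate-argument arcs
print(graph_conv(features, semantic_edges, tokens).shape)  # (5, 64)
```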
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
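The decomposition idea in the last entry (close in spirit to the neural abstructions above) reduces to a small recursive parser. The sketch below is hypothetical, with `KNOWN` standing in for the parser's existing coverage.

```python
# Teaching by decomposition: when the parser fails on a high-level
# utterance, the user supplies low-level steps, and the mapping is stored
# so the new abstraction is reusable later.
KNOWN = {"move forward": "FORWARD", "turn left": "LEFT"}
LEARNED = {}

def parse(utterance):
    if utterance in KNOWN:
        return [KNOWN[utterance]]
    if utterance in LEARNED:
        return [a for step in LEARNED[utterance] for a in parse(step)]
    raise KeyError(utterance)

def teach_by_decomposition(utterance, steps):
    LEARNED[utterance] = steps  # user breaks the utterance into known steps

teach_by_decomposition("go around the corner",
                       ["move forward", "turn left", "move forward"])
print(parse("go around the corner"))  # ['FORWARD', 'LEFT', 'FORWARD']
```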
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.