Laziness Is a Virtue When It Comes to Compositionality in Neural
Semantic Parsing
- URL: http://arxiv.org/abs/2305.04346v1
- Date: Sun, 7 May 2023 17:53:08 GMT
- Title: Laziness Is a Virtue When It Comes to Compositionality in Neural
Semantic Parsing
- Authors: Maxwell Crouse, Pavan Kapanipathi, Subhajit Chaudhury, Tahira Naseem,
Ramon Astudillo, Achille Fokoue, Tim Klinger
- Abstract summary: We introduce a neural semantic parsing generation method that constructs logical forms from the bottom up, beginning from the logical form's leaves.
We show that our novel, bottom-up semantic parsing technique outperforms general-purpose semantic parsers while also being competitive with comparable neural parsers designed for each task.
- Score: 20.856601758389544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nearly all general-purpose neural semantic parsers generate logical forms in
a strictly top-down autoregressive fashion. Though such systems have achieved
impressive results across a variety of datasets and domains, recent works have
called into question whether they are ultimately limited in their ability to
compositionally generalize. In this work, we approach semantic parsing from,
quite literally, the opposite direction; that is, we introduce a neural
semantic parsing generation method that constructs logical forms from the
bottom up, beginning from the logical form's leaves. The system we introduce is
lazy in that it incrementally builds up a set of potential semantic parses, but
only expands and processes the most promising candidate parses at each
generation step. Such a parsimonious expansion scheme allows the system to
maintain an arbitrarily large set of parse hypotheses that are never realized
and thus incur minimal computational overhead. We evaluate our approach on
compositional generalization; specifically, on the challenging CFQ dataset and
three Text-to-SQL datasets where we show that our novel, bottom-up semantic
parsing technique outperforms general-purpose semantic parsers while also being
competitive with comparable neural parsers that have been designed for each
task.
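A minimal sketch of this lazy expansion scheme, assuming the candidate parses are kept in a priority queue keyed by a neural model's score: only the top-scoring candidate is popped and expanded at each step, while everything else stays on the queue unexpanded and so costs essentially nothing to keep around. The names (`Candidate`, `score_fn`, `expand_fn`, `is_complete`) are illustrative assumptions, not the authors' implementation.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Illustrative sketch of lazy bottom-up parse expansion (not the paper's code).
# Candidates start from the logical form's leaves and are combined bottom-up;
# at each step only the most promising candidate is expanded.

@dataclass(order=True)
class Candidate:
    neg_score: float                       # heapq is a min-heap, so store -score
    parse: object = field(compare=False)   # partial logical form built so far

def lazy_bottom_up_parse(
    leaves: List[object],
    score_fn: Callable[[object], float],          # hypothetical neural scorer
    expand_fn: Callable[[object], List[object]],  # hypothetical combination step
    is_complete: Callable[[object], bool],
    max_steps: int = 1000,
) -> Optional[object]:
    # Seed the frontier with leaf-level parses (the logical form's leaves).
    frontier = [Candidate(-score_fn(leaf), leaf) for leaf in leaves]
    heapq.heapify(frontier)

    for _ in range(max_steps):
        if not frontier:
            return None
        best = heapq.heappop(frontier)      # only the top candidate is processed
        if is_complete(best.parse):
            return best.parse
        # Expand the best candidate into larger partial parses; all other
        # hypotheses remain unexpanded on the heap.
        for new_parse in expand_fn(best.parse):
            heapq.heappush(frontier, Candidate(-score_fn(new_parse), new_parse))
    return None
```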
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
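As a rough illustration of that relaxation step, the toy sketch below turns a hard hierarchy rule ("a pixel labeled with a child class must also carry its parent class") into a differentiable penalty over predicted probabilities. The rule format, the particular relaxation, and the function names are assumptions for illustration, not LOGICSEG's actual formulation.

```python
# Illustrative sketch (not LOGICSEG's code): relaxing a hard logical rule into
# a differentiable penalty over predicted class probabilities.
# Rule: child(x) -> parent(x), e.g. "dog" implies "animal" for every pixel x.

def implication_penalty(p_child: float, p_parent: float) -> float:
    """Fuzzy relaxation of (child -> parent): the implication is satisfied when
    the parent's probability is at least the child's, so the violation is the
    amount by which p_child exceeds p_parent. In a real system this would be a
    tensor operation inside the network's computational graph, added to the loss."""
    return max(0.0, p_child - p_parent)

def logic_loss(probs: dict, rules: list) -> float:
    """Sum of rule violations for one pixel's predicted class distribution.
    `probs` maps class names to probabilities; `rules` is a list of
    (child, parent) pairs encoding a label hierarchy (hypothetical format)."""
    return sum(implication_penalty(probs[c], probs[p]) for c, p in rules)

# Toy usage: this prediction violates "dog -> animal" by roughly 0.3.
example_probs = {"dog": 0.8, "animal": 0.5, "vehicle": 0.1}
example_rules = [("dog", "animal")]
print(logic_loss(example_probs, example_rules))
```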
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Hierarchical Neural Data Synthesis for Semantic Parsing [16.284764879030448]
We propose a purely neural approach of data augmentation for semantic parsing.
We achieve state-of-the-art performance on the development set (77.2% accuracy) using our zero-shot augmentation.
arXiv Detail & Related papers (2021-12-04T01:33:08Z)
- On The Ingredients of an Effective Zero-shot Semantic Parser [95.01623036661468]
We analyze zero-shot learning by paraphrasing training examples of canonical utterances and programs from a grammar.
We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods.
Our model achieves strong performance on two semantic parsing benchmarks (Scholar, Geo) with zero labeled data.
arXiv Detail & Related papers (2021-10-15T21:41:16Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- The Limitations of Limited Context for Constituency Parsing [27.271792317099045]
The Parsing-Reading-Predict architecture of Shen et al. (2018a) was the first to perform unsupervised syntactic parsing.
What kind of syntactic structure can current neural approaches to syntax represent?
We ground this question in the sandbox of probabilistic context-free grammars (PCFGs).
We identify a key aspect of the representational power of these approaches: the amount and directionality of context that the predictor has access to.
arXiv Detail & Related papers (2021-06-03T03:58:35Z)
- SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics [39.845425535943534]
We propose a Systematic Generalization testbed based on Natural language Semantics (SyGNS).
We test whether neural networks can systematically parse sentences involving novel combinations of logical expressions such as quantifiers and negation.
Experiments show that Transformer and GRU models can generalize to unseen combinations of quantifiers, negations, and modifiers that are similar in form to given training instances, but not to others.
arXiv Detail & Related papers (2021-06-02T11:24:41Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both? [27.590858384414567]
We ask: can we develop a semantic parsing approach that handles both natural language variation and compositional generalization?
We propose new train and test splits of non-synthetic datasets to better assess this capability.
We also propose NQG-T5, a hybrid model that combines a high-precision grammar-based approach with a pre-trained sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-24T00:38:27Z)
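A minimal sketch of that kind of hybrid, assuming the combination works by trying the high-precision grammar-based parser first and falling back to the seq2seq model only when the grammar produces no parse; the function names, stub parsers, and exact fallback rule below are illustrative assumptions, not the released NQG-T5 code.

```python
from typing import Callable, Optional

# Illustrative sketch of a grammar-first hybrid parser in the spirit of NQG-T5
# (names and fallback rule are assumptions, not the authors' implementation).

def hybrid_parse(
    utterance: str,
    grammar_parse: Callable[[str], Optional[str]],  # high-precision, may return None
    seq2seq_parse: Callable[[str], str],            # pre-trained seq2seq, always answers
) -> str:
    # Prefer the grammar-based parser: when it fires, it is precise and tends
    # to generalize compositionally.
    result = grammar_parse(utterance)
    if result is not None:
        return result
    # Otherwise fall back to the seq2seq model, which handles natural language
    # variation the grammar does not cover.
    return seq2seq_parse(utterance)

# Toy usage with stub parsers.
stub_grammar = lambda u: "answer(capital(texas))" if "capital of texas" in u else None
stub_seq2seq = lambda u: "answer(unknown)"
print(hybrid_parse("what is the capital of texas", stub_grammar, stub_seq2seq))
print(hybrid_parse("name texas' seat of government", stub_grammar, stub_seq2seq))
```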
- Compositional Generalization via Semantic Tagging [81.24269148865555]
We propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models.
We show that the proposed approach consistently improves compositional generalization across model architectures, domains, and semantic formalisms.
arXiv Detail & Related papers (2020-10-22T15:55:15Z)