Controllable Semantic Parsing via Retrieval Augmentation
- URL: http://arxiv.org/abs/2110.08458v1
- Date: Sat, 16 Oct 2021 03:34:49 GMT
- Title: Controllable Semantic Parsing via Retrieval Augmentation
- Authors: Panupong Pasupat and Yuan Zhang and Kelvin Guu
- Abstract summary: We propose ControllAble Semantic Parser via Exemplar Retrieval (CASPER).
We show that CASPER can parse queries in a new domain, adapt the prediction toward the specified patterns, or adapt to new semantic schemas without having to further re-train the model.
- Score: 14.528396278058285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In practical applications of semantic parsing, we often want to rapidly
change the behavior of the parser, such as enabling it to handle queries in a
new domain, or changing its predictions on certain targeted queries. While we
can introduce new training examples exhibiting the target behavior, a mechanism
for enacting such behavior changes without expensive model re-training would be
preferable. To this end, we propose ControllAble Semantic Parser via Exemplar
Retrieval (CASPER). Given an input query, the parser retrieves related
exemplars from a retrieval index, augments them to the query, and then applies
a generative seq2seq model to produce an output parse. The exemplars act as a
control mechanism over the generic generative model: by manipulating the
retrieval index or how the augmented query is constructed, we can manipulate
the behavior of the parser. On the MTOP dataset, in addition to achieving
state-of-the-art on the standard setup, we show that CASPER can parse queries
in a new domain, adapt the prediction toward the specified patterns, or adapt
to new semantic schemas without having to further re-train the model.
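The retrieve-augment-generate loop the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the lexical-overlap retriever, the exemplar index, and the MTOP-style parses below are all stand-ins (CASPER uses a learned retriever and a trained seq2seq model).

```python
def retrieve_exemplars(query, index, k=2):
    """Toy retriever: rank exemplars by word overlap with the query.
    (Stand-in for CASPER's learned retrieval component.)"""
    def overlap(a, b):
        return len(set(a.split()) & set(b.split()))
    ranked = sorted(index, key=lambda ex: overlap(query, ex["query"]), reverse=True)
    return ranked[:k]

def build_augmented_input(query, exemplars):
    """Concatenate retrieved exemplars onto the input query; the result
    would be fed to a generative seq2seq parser."""
    parts = [f"{ex['query']} => {ex['parse']}" for ex in exemplars]
    return " ; ".join(parts) + " ; " + query

# Hypothetical retrieval index of (query, parse) exemplars.
index = [
    {"query": "set an alarm for 7 am", "parse": "[IN:CREATE_ALARM [SL:TIME 7 am]]"},
    {"query": "play some jazz", "parse": "[IN:PLAY_MUSIC [SL:GENRE jazz]]"},
    {"query": "set a timer for 10 minutes", "parse": "[IN:CREATE_TIMER [SL:TIME 10 minutes]]"},
]

query = "set an alarm for 6 am"
augmented = build_augmented_input(query, retrieve_exemplars(query, index))
```

Manipulating `index` (adding, removing, or overriding exemplars) changes the parser's behavior without retraining, which is the control mechanism the abstract refers to.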
Related papers
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing plugin (QIPP) that captures latent query patterns from code-like query instructions.
arXiv Detail & Related papers (2024-10-27T03:18:52Z)
- Improving Retrieval-augmented Text-to-SQL with AST-based Ranking and Schema Pruning [10.731045939849125]
We focus on Text-to-SQL semantic parsing from the perspective of retrieval-augmented generation.
Motivated by challenges related to the size of commercial database schemata and the deployability of business intelligence solutions, we propose ASTReS, which dynamically retrieves input database information.
arXiv Detail & Related papers (2024-07-03T15:55:14Z)
- A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks [29.764014766305174]
We show how a transformer model is able to perform ICL under reasonable assumptions on the pre-training process and the downstream tasks.
We establish that this entire procedure is implementable using the transformer mechanism.
arXiv Detail & Related papers (2023-05-26T15:49:43Z)
- Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create semantically meaningful tuples of codewords to serve as a Semantic ID for each item.
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
arXiv Detail & Related papers (2023-05-08T21:48:17Z)
- Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing [25.725176422936766]
We propose GandR, a retrieval procedure that retrieves exemplars whose outputs are also similar.
GandR first generates a preliminary prediction with input-based retrieval.
Then, it retrieves exemplars with outputs similar to the preliminary prediction which are used to generate a final prediction.
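The two-stage procedure described above can be sketched as follows. Everything here is a toy stand-in under stated assumptions: `sim` replaces a real similarity model, and `copy_top` replaces the seq2seq generator that GandR actually conditions on retrieved exemplars.

```python
def gandr_parse(query, index, generate, k=2):
    """Toy GandR-style procedure: (1) input-based retrieval plus a
    preliminary prediction; (2) retrieval by *output* similarity to that
    preliminary prediction, then a final prediction."""
    def sim(a, b):
        return len(set(a.split()) & set(b.split()))
    # Stage 1: retrieve by input similarity, generate a preliminary parse.
    by_input = sorted(index, key=lambda ex: sim(query, ex["query"]), reverse=True)[:k]
    preliminary = generate(query, by_input)
    # Stage 2: re-retrieve exemplars whose outputs resemble the
    # preliminary parse, and generate the final parse from them.
    by_output = sorted(index, key=lambda ex: sim(preliminary, ex["parse"]), reverse=True)[:k]
    return generate(query, by_output)

# Hypothetical stand-in generator: copy the top exemplar's parse.
copy_top = lambda query, exemplars: exemplars[0]["parse"]

index = [
    {"query": "wake me at seven", "parse": "[IN:CREATE_ALARM [SL:TIME seven]]"},
    {"query": "remind me to call mom", "parse": "[IN:CREATE_REMINDER [SL:TODO call mom]]"},
]
final = gandr_parse("wake me at eight", index, copy_top)
```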
arXiv Detail & Related papers (2022-09-29T16:03:29Z)
- UnifieR: A Unified Retriever for Large-Scale Retrieval [84.61239936314597]
Large-scale retrieval recalls relevant documents from a huge collection given a query.
Recent retrieval methods based on pre-trained language models (PLM) can be coarsely categorized into either dense-vector or lexicon-based paradigms.
We propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability.
arXiv Detail & Related papers (2022-05-23T11:01:59Z)
- Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de-facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that does not force any structure on the search space: using all n-grams in a passage as its possible identifiers.
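Treating every n-gram of a passage as one of its identifiers can be sketched directly; this toy version materializes the n-grams, whereas the actual system matches them efficiently against a full corpus index.

```python
def ngram_identifiers(passage, n=3):
    """All word-level n-grams of a passage, each usable as one of its
    identifiers in an unstructured search space."""
    tokens = passage.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

ids = ngram_identifiers("the quick brown fox jumps", 3)
```

Any generated n-gram that occurs in the passage thus points back to it, so an autoregressive decoder can identify documents without a hierarchical partition of the search space.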
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP.
We propose to predict flattened intents and slots representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
- Coarse-to-Fine Memory Matching for Joint Retrieval and Classification [0.7081604594416339]
We present a novel end-to-end language model for joint retrieval and classification.
We evaluate it on the standard blind test set of the FEVER fact verification dataset.
We extend exemplar auditing to this setting for analyzing and constraining the model.
arXiv Detail & Related papers (2020-11-29T05:06:03Z)
- Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle [88.65264818967489]
We propose a new syntax-aware language model: Syntactic Ordered Memory (SOM).
The model explicitly models the structure with an incremental parser and maintains the conditional probability setting of a standard language model.
Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests.
arXiv Detail & Related papers (2020-10-21T17:39:15Z)
- Message Passing Query Embedding [4.035753155957698]
We propose a graph neural network to encode a graph representation of a query.
We show that the model learns entity embeddings that capture the notion of entity type without explicit supervision.
arXiv Detail & Related papers (2020-02-06T17:40:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.