Correcting Semantic Parses with Natural Language through Dynamic Schema
Encoding
- URL: http://arxiv.org/abs/2305.19974v1
- Date: Wed, 31 May 2023 16:01:57 GMT
- Title: Correcting Semantic Parses with Natural Language through Dynamic Schema
Encoding
- Authors: Parker Glenn, Parag Pravin Dakle, Preethi Raghavan
- Abstract summary: We show that the accuracy of autoregressive decoders can be boosted by up to 26% with only one turn of correction with natural language.
A T5-base model is capable of correcting the errors of a T5-large model in a zero-shot, cross-parser setting.
- Score: 0.06445605125467573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In addressing the task of converting natural language to SQL queries, there
are several semantic and syntactic challenges. It becomes increasingly
important to understand and remedy the points of failure as the performance of
semantic parsing systems improves. We explore semantic parse correction with
natural language feedback, proposing a new solution built on the success of
autoregressive decoders in text-to-SQL tasks. By separating the semantic and
syntactic difficulties of the task, we show that the accuracy of text-to-SQL
parsers can be boosted by up to 26% with only one turn of correction with
natural language. Additionally, we show that a T5-base model is capable of
correcting the errors of a T5-large model in a zero-shot, cross-parser setting.
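As an illustration, a one-turn correction pass can be framed as a single seq2seq call whose input serializes the question, the erroneous parse, the user's feedback, and the schema. The sketch below is a minimal, hypothetical framing with a T5 corrector; the serialization format and the use of a plain "t5-base" checkpoint are assumptions for illustration, not the paper's exact dynamic schema encoding.

    # Hypothetical sketch of one-turn parse correction with a T5 corrector.
    # The input serialization is an illustrative assumption; the paper's
    # dynamic schema encoding may differ. Assumes a checkpoint fine-tuned
    # for correction; an off-the-shelf t5-base will not emit valid SQL.
    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tokenizer = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    def correct_parse(question, wrong_sql, feedback, schema):
        # Serialize the full correction context into one source sequence.
        source = (
            f"correct: {question} | parse: {wrong_sql} "
            f"| feedback: {feedback} | schema: {schema}"
        )
        inputs = tokenizer(source, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_new_tokens=128)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    corrected = correct_parse(
        "How many singers are older than 40?",
        "SELECT count(*) FROM singer WHERE age < 40",
        "find singers older than 40, not younger",
        "singer: singer_id, name, age",
    )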
Related papers
- MrT5: Dynamic Token Merging for Efficient Byte-level Language Models [50.46453950887946]
This work introduces MrT5 (MergeT5), a more efficient variant of ByT5.
MrT5 integrates a token deletion mechanism in its encoder to dynamically shorten the input sequence length.
When trained on English text, MrT5 demonstrates the capability to transfer its deletion feature zero-shot across several languages.
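A minimal sketch of the deletion idea, with an invented gate (the real MrT5 mechanism and its training objective are more involved):

    # Hypothetical token-deletion gate: score each encoder state and drop
    # low-scoring tokens to dynamically shorten the sequence.
    import torch
    import torch.nn as nn

    class TokenDeletionGate(nn.Module):
        def __init__(self, d_model, threshold=0.5):
            super().__init__()
            self.score = nn.Linear(d_model, 1)   # per-token keep score
            self.threshold = threshold

        def forward(self, hidden):               # hidden: (seq_len, d_model)
            keep = torch.sigmoid(self.score(hidden)).squeeze(-1) > self.threshold
            return hidden[keep]                  # shortened sequence

    gate = TokenDeletionGate(d_model=512)
    shortened = gate(torch.randn(100, 512))      # usually fewer than 100 rows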
arXiv Detail & Related papers (2024-10-28T06:14:12Z) - T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing [8.363108209152111]
Seq2seq semantic parsers face many more challenges, including poor quality on schema information prediction.
This paper proposes a seq2seq-oriented decoding strategy called T5-SR, which includes a new intermediate representation and a reranking method with a score re-estimator.
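A minimal sketch of reranking beam candidates with a score re-estimator; `re_estimate` stands in for the learned scorer, and the linear mixing weight is an assumption:

    # Hypothetical reranking: combine the decoder's beam score with a
    # re-estimated score and keep the best-scoring candidate SQL.
    def rerank(candidates, beam_scores, re_estimate, alpha=0.5):
        rescored = [
            (alpha * beam + (1 - alpha) * re_estimate(sql), sql)
            for sql, beam in zip(candidates, beam_scores)
        ]
        return max(rescored)[1]  # SQL with the best combined score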
arXiv Detail & Related papers (2023-06-14T08:57:13Z) - Towards preserving word order importance through Forced Invalidation [80.33036864442182]
We show that pre-trained language models are insensitive to word order.
We propose Forced Invalidation to help preserve the importance of word order.
Our experiments demonstrate that Forced Invalidation significantly improves the sensitivity of the models to word order.
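The summary does not spell out the mechanism; one plausible, purely illustrative reading is to pair each sentence with a shuffled copy that the model is forced to label invalid:

    # Hypothetical construction of forced-invalidation training pairs:
    # a shuffled sentence is labelled invalid, so word order must matter.
    import random

    def make_pairs(sentence):
        tokens = sentence.split()
        shuffled = tokens[:]
        random.shuffle(shuffled)
        return [(sentence, "valid"), (" ".join(shuffled), "invalid")]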
arXiv Detail & Related papers (2023-04-11T13:42:10Z) - Conversational Text-to-SQL: An Odyssey into State-of-the-Art and
Challenges Ahead [6.966624873109535]
State-of-the-art (SOTA) systems use large, pre-trained and finetuned language models, such as the T5-family.
With multi-tasking (MT) over coherent tasks with discrete prompts during training, we improve over specialized text-to-SQL models.
We conduct studies to tease apart errors attributable to domain and compositional generalization.
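A minimal sketch of multi-tasking with discrete prompts: each coherent task is marked with a textual prefix on the same seq2seq model (the prefixes here are invented):

    # Hypothetical discrete-prompt formatting for multi-task training.
    def to_example(task, source, target):
        return {"source": f"{task}: {source}", "target": target}

    batch = [
        to_example("text-to-sql", "List all singers older than 40.",
                   "SELECT name FROM singer WHERE age > 40"),
        to_example("response-generation", "SELECT count(*) FROM singer",
                   "There are 12 singers."),
    ]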
arXiv Detail & Related papers (2023-02-21T23:15:33Z) - Graphix-T5: Mixing Pre-Trained Transformers with Graph-Aware Layers for
Text-to-SQL Parsing [56.232873134174056]
One of the major challenges in text-to-SQL parsing is domain generalization, i.e., how to generalize well to unseen databases.
In this work, we explore ways to further augment the pre-trained text-to-text transformer model with specialized components for text-to-SQL parsing.
To this end, we propose GRAPHIX-T5, a new architecture augmented with specially-designed graph-aware layers.
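A minimal sketch of mixing a graph-aware path into a transformer layer; the additive combination and the linear stand-in for a graph step are assumptions:

    # Hypothetical graph-aware layer: a semantic self-attention path and a
    # structural path over the schema graph, summed per token.
    import torch
    import torch.nn as nn

    class GraphAwareLayer(nn.Module):
        def __init__(self, d_model, n_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.rel = nn.Linear(d_model, d_model)     # stand-in for a GNN step

        def forward(self, hidden, adj):                # (B, L, D), (B, L, L)
            sem, _ = self.attn(hidden, hidden, hidden)
            struct = self.rel(torch.bmm(adj, hidden))  # aggregate graph neighbours
            return sem + struct

    layer = GraphAwareLayer(d_model=512)
    out = layer(torch.randn(2, 16, 512), torch.eye(16).repeat(2, 1, 1))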
arXiv Detail & Related papers (2023-01-18T13:29:05Z) - MIGA: A Unified Multi-task Generation Framework for Conversational
Text-to-SQL [48.34333725045152]
Most state-of-the-art conversational text-to-SQL methods are incompatible with generative pre-trained language models (PLMs), such as T5.
We present a two-stage unified MultI-task Generation frAmework (MIGA) that leverages PLMs' ability to tackle conversational text-to-SQL.
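As a purely illustrative reading of the two-stage setup, a single generator could first be trained on a mixture of related tasks and then adapted to the target conversational text-to-SQL data:

    # Hypothetical two-stage schedule: multi-task training, then
    # target-task adaptation on the same seq2seq model.
    def train_two_stage(model, fit, multitask_data, target_data):
        # Stage 1: one model, several task-prefixed datasets mixed together.
        mixed = [ex for task in multitask_data.values() for ex in task]
        fit(model, mixed)
        # Stage 2: adapt to the target conversational text-to-SQL data.
        fit(model, target_data)
        return model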
arXiv Detail & Related papers (2022-12-19T07:14:32Z) - SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers [61.48159785138462]
This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in neural-network-based approaches (called SUN).
Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms competitors and achieves new state-of-the-art results.
arXiv Detail & Related papers (2022-09-14T06:27:51Z) - S$^2$SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder
for Text-to-SQL Parsers [66.78665327694625]
We propose S$^2$SQL, injecting Syntax into the Question-Schema interaction graph encoder for Text-to-SQL parsing.
We also employ a decoupling constraint to induce diverse relational edge embeddings, which further improves the network's performance.
Experiments on Spider and the robustness setting Spider-Syn demonstrate that the proposed approach outperforms all existing methods when pre-training models are used.
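A minimal sketch of adding syntactic edges to the question-schema interaction graph; the edge labels and matching rule are invented:

    # Hypothetical edge construction: schema-linking edges plus syntactic
    # dependency edges among question tokens.
    def build_edges(question_tokens, schema_items, dependency_arcs):
        edges = []
        for i, tok in enumerate(question_tokens):
            for j, item in enumerate(schema_items):
                if tok.lower() in item.lower():
                    edges.append((("q", i), ("s", j), "exact-match"))
        for head, dep in dependency_arcs:              # syntax injection
            edges.append((("q", head), ("q", dep), "syntax"))
        return edges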
arXiv Detail & Related papers (2022-03-14T09:49:15Z) - NL-EDIT: Correcting semantic parse errors through natural language
interaction [28.333860779302306]
We present NL-EDIT, a model for interpreting natural language feedback in the interaction context.
We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one turn of correction.
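NL-EDIT views feedback as edits to the original parse; below is a hypothetical application of such edits over a clause-level SQL representation (the edit schema is invented):

    # Hypothetical application of feedback-derived edits to a parse.
    def apply_edits(clauses, edits):
        for op, clause, value in edits:
            if op == "replace":
                clauses[clause] = value
            elif op == "add":
                prev = clauses.get(clause)
                clauses[clause] = f"{prev} AND {value}" if prev else value
        return clauses

    parse = {"select": "count(*)", "where": "age < 40"}
    apply_edits(parse, [("replace", "where", "age > 40")])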
arXiv Detail & Related papers (2021-03-26T15:45:46Z) - MT-Teql: Evaluating and Augmenting Consistency of Text-to-SQL Models
with Metamorphic Testing [11.566463879334862]
We propose MT-Teql, a Metamorphic Testing-based framework for evaluating and augmenting the consistency of text-to-SQL models.
Our framework exposes thousands of prediction errors from SOTA models and enriches existing datasets by an order of magnitude, eliminating over 40% of inconsistency errors without compromising standard accuracy.
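A minimal sketch of a metamorphic consistency check: a semantics-preserving transformation of the input should leave the predicted SQL unchanged (the transformation here is a placeholder):

    # Hypothetical metamorphic check: parser output should be invariant
    # under a semantics-preserving rewrite of the question.
    def check_consistency(parser, question, rewrite):
        return parser(question) == parser(rewrite(question))

    # e.g. rewrite = lambda q: q.replace("movies", "films")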
arXiv Detail & Related papers (2020-12-21T07:43:31Z) - Speak to your Parser: Interactive Text-to-SQL with Natural Language
Feedback [39.45695779589969]
We study the task of semantic parse correction with natural language feedback.
In this paper, we investigate a scenario in which humans can further interact with the system.
arXiv Detail & Related papers (2020-05-05T23:58:09Z)