SeaD: End-to-end Text-to-SQL Generation with Schema-aware Denoising
- URL: http://arxiv.org/abs/2105.07911v1
- Date: Mon, 17 May 2021 14:49:54 GMT
- Title: SeaD: End-to-end Text-to-SQL Generation with Schema-aware Denoising
- Authors: Kuan Xuan, Yongbo Wang, Yongliang Wang, Zujie Wen, Yang Dong
- Abstract summary: In the text-to-SQL task, seq-to-seq models often yield sub-optimal performance due to limitations in their architecture.
We present a simple yet effective approach that adapts a transformer-based seq-to-seq model to robust text-to-SQL generation.
- Score: 7.127280935638075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the text-to-SQL task, seq-to-seq models often yield sub-optimal
performance due to limitations in their architecture. In this paper, we present
a simple yet effective approach that adapts a transformer-based seq-to-seq
model to robust text-to-SQL generation. Instead of imposing constraints on the
decoder or reformulating the task as slot-filling, we propose to train the
seq-to-seq model with Schema-aware Denoising (SeaD), which consists of two
denoising objectives that train the model to either recover the input or
predict the output under two novel noise types, erosion and shuffle. These
denoising objectives act as auxiliary tasks for better modeling of structural
data in S2S generation. In addition, we improve upon and propose a
clause-sensitive execution-guided (EG) decoding strategy to overcome the
limitations of EG decoding for generative models. Experiments show that the
proposed method improves the performance of the seq-to-seq model in both
schema linking and grammar correctness and establishes a new state of the art
on the WikiSQL benchmark. The results indicate that the capacity of the
vanilla seq-to-seq architecture for text-to-SQL may have been underestimated.
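The abstract names the two noise types but this listing carries no code. As a rough illustration only, the following is a minimal Python sketch of schema-aware noising under two assumptions: that erosion randomly drops or substitutes schema columns, and that shuffle randomly permutes column order. All function names, parameters, and the serialization format below are hypothetical, not taken from the paper.

```python
import random

def erosion_noise(columns, drop_prob=0.15, distractors=None):
    # Hypothetical erosion noise: randomly drop a schema column or
    # substitute it with a distractor column from another table.
    noised = []
    for col in columns:
        r = random.random()
        if r < drop_prob:
            continue  # erode: the column disappears from the input
        if distractors and r < 2 * drop_prob:
            noised.append(random.choice(distractors))  # substitute
        else:
            noised.append(col)
    return noised

def shuffle_noise(columns):
    # Hypothetical shuffle noise: permute column order so the model
    # cannot rely on position when linking question spans to schema.
    noised = list(columns)
    random.shuffle(noised)
    return noised

def build_denoising_example(question, columns, sql):
    # Serialize one training pair: the encoder sees the question plus
    # a noised schema; the decoder must still emit the correct SQL
    # (or, in the recovery objective, the clean schema).
    noised = shuffle_noise(erosion_noise(columns, distractors=["release_year"]))
    source = question + " | " + " , ".join(noised)
    return source, sql

src, tgt = build_denoising_example(
    "How many singers are from France?",
    ["singer_id", "name", "country", "age"],
    "SELECT COUNT(*) FROM singer WHERE country = 'France'",
)
```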
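Similarly, the execution-guided decoding that the abstract builds on can be illustrated with a generic sketch: beam candidates are executed against the database, and the best-scoring candidate that runs without error is kept. This sketch uses Python's standard sqlite3 module and does not reproduce the clause-sensitive refinements the paper proposes; the function name and fallback behavior are assumptions.

```python
import sqlite3

def execution_guided_pick(candidates, db_path):
    # Generic EG filter: try beam hypotheses (sql, score) in score order
    # and return the first one that executes cleanly. (The paper's
    # clause-sensitive variant inspects individual SQL clauses; that is
    # not modeled here.)
    conn = sqlite3.connect(db_path)
    try:
        for sql, score in sorted(candidates, key=lambda c: -c[1]):
            try:
                conn.execute(sql).fetchall()
                return sql  # first executable candidate wins
            except sqlite3.Error:
                continue  # syntax/schema errors: discard this hypothesis
        return candidates[0][0]  # nothing executed: fall back to top beam
    finally:
        conn.close()
```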
Related papers
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing plugin (QIPP) that captures latent query patterns from code-like query instructions.
arXiv Detail & Related papers (2024-10-27T03:18:52Z)
- Meta-DiffuB: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration [53.63593099509471]
We propose a scheduler-exploiter S2S-Diffusion paradigm designed to overcome the limitations of existing S2S-Diffusion models.
We employ Meta-Exploration to train an additional scheduler model dedicated to scheduling contextualized noise for each sentence.
Our exploiter model, an S2S-Diffusion model, leverages the noise scheduled by our scheduler model for updating and generation.
arXiv Detail & Related papers (2024-10-17T04:06:02Z)
- COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement [80.18490952057125]
Iterative refinement has emerged as an effective paradigm for enhancing the capabilities of large language models (LLMs) on complex tasks.
We propose Context-Wise Order-Agnostic Language Modeling (COrAL) to overcome these challenges.
Our approach models multiple token dependencies within manageable context windows, enabling the model to perform iterative refinement internally.
arXiv Detail & Related papers (2024-10-12T23:56:19Z)
- T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing [8.363108209152111]
Seq2seq semantic parsers face additional challenges, including poor quality in predicting schema-related information.
This paper proposes a seq2seq-oriented decoding strategy called SR, which includes a new intermediate representation, SSQL, and a reranking method with a score re-estimator.
arXiv Detail & Related papers (2023-06-14T08:57:13Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Text Generation with Text-Editing Models [78.03750739936956]
This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches.
We discuss challenges related to productionization and how these models can be used to mitigate hallucination and bias.
arXiv Detail & Related papers (2022-06-14T17:58:17Z)
- Controllable Text Generation with Neurally-Decomposed Oracle [91.18959622763055]
We propose a framework to control auto-regressive generation models with a NeurAlly-Decomposed Oracle (NADO).
We present a closed-form optimal solution to incorporate the token-level guidance into the base model for controllable generation.
arXiv Detail & Related papers (2022-05-27T20:17:53Z)
- Tiny Neural Models for Seq2Seq [0.0]
We propose a projection-based encoder-decoder model referred to as pQRNN-MAtt.
The resulting quantized models are less than 3.5MB in size and are well suited for on-device latency critical applications.
We show that on MTOP, a challenging multilingual semantic parsing dataset, the average model performance surpasses that of an LSTM-based seq2seq model with pre-trained embeddings, despite being 85x smaller.
arXiv Detail & Related papers (2021-08-07T00:39:42Z)
- MT-Teql: Evaluating and Augmenting Consistency of Text-to-SQL Models with Metamorphic Testing [11.566463879334862]
We propose MT-Teql, a Metamorphic Testing-based framework for evaluating and augmenting the consistency of text-to-SQL models.
Our framework exposes thousands of prediction errors in SOTA models and enriches existing datasets by an order of magnitude, eliminating over 40% of inconsistency errors without compromising standard accuracy.
arXiv Detail & Related papers (2020-12-21T07:43:31Z)