T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing
- URL: http://arxiv.org/abs/2306.08368v1
- Date: Wed, 14 Jun 2023 08:57:13 GMT
- Title: T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing
- Authors: Yuntao Li and Zhenpeng Su and Yutian Li and Hanchu Zhang and Sirui
Wang and Wei Wu and Yan Zhang
- Abstract summary: seq2seq semantic parsers face more challenges, including poor quality on schematic information prediction.
This paper proposes a seq2seq-oriented decoding strategy called SR, which includes a new intermediate representation SSQL and a reranking method with a score re-estimator.
- Score: 8.363108209152111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Translating natural language queries into SQLs in a seq2seq manner has
attracted much attention recently. However, compared with
abstract-syntax-tree-based SQL generation, seq2seq semantic parsers face
more challenges, including poor quality on schematic information
prediction and poor semantic coherence between natural language queries and
SQLs. This paper analyses the above difficulties and proposes a
seq2seq-oriented decoding strategy called SR, which includes a new intermediate
representation SSQL and a reranking method with score re-estimator to solve the
above obstacles respectively. Experimental results demonstrate the
effectiveness of our proposed techniques and T5-SR-3b achieves new
state-of-the-art results on the Spider dataset.
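The abstract describes a two-part strategy: decoding into an intermediate representation (SSQL) and then reranking candidates with a score re-estimator. The paper does not give implementation details here, so the following is only an illustrative sketch of the generic reranking step; the function names, the scoring combination, and the stand-in re-estimator are all hypothetical, not taken from the paper.

```python
# Illustrative sketch of reranking beam candidates with a score
# re-estimator. All names and the 50/50 weighting are hypothetical.

def rerank(candidates, re_estimate):
    """Return the candidate SQL with the highest combined score.

    candidates: list of (sql_string, beam_score) pairs from a
                seq2seq decoder's beam search.
    re_estimate: a callable scoring semantic coherence between the
                 natural language query and a candidate SQL
                 (the "score re-estimator" role).
    """
    def combined(pair):
        sql, beam_score = pair
        # Blend the decoder's beam score with the re-estimated score;
        # equal weights are an arbitrary illustrative choice.
        return 0.5 * beam_score + 0.5 * re_estimate(sql)

    return max(candidates, key=combined)[0]

# Toy usage with a stand-in re-estimator that favors longer candidates.
cands = [("SELECT name FROM users", 0.9),
         ("SELECT name FROM users WHERE age > 30", 0.8)]
best = rerank(cands, lambda sql: len(sql) / 40.0)
```

In practice the re-estimator would itself be a learned model scoring query-SQL pairs, but the selection logic reduces to a weighted argmax like the one above.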
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z) - Semantic Decomposition of Question and SQL for Text-to-SQL Parsing [2.684900573255764]
We propose a new modular Query Plan Language (QPL) that systematically decomposes SQL queries into simple and regular sub-queries.
Experimental results demonstrate that text-to-QPL is more effective than text-to-SQL for semantically equivalent queries.
arXiv Detail & Related papers (2023-10-20T15:13:34Z) - Correcting Semantic Parses with Natural Language through Dynamic Schema
Encoding [0.06445605125467573]
We show that the accuracy of autoregressive decoders can be boosted by up to 26% with only one turn of correction with natural language.
A T5-base model is capable of correcting the errors of a T5-large model in a zero-shot, cross-parser setting.
arXiv Detail & Related papers (2023-05-31T16:01:57Z) - HPE:Answering Complex Questions over Text by Hybrid Question Parsing and
Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z) - Conversational Text-to-SQL: An Odyssey into State-of-the-Art and
Challenges Ahead [6.966624873109535]
State-of-the-art (SOTA) systems use large, pre-trained and fine-tuned language models, such as the T5-family.
With multi-tasking (MT) over coherent tasks with discrete prompts during training, we improve over specialized text-to-SQL models.
We conduct studies to tease apart errors attributable to domain and compositional generalization.
arXiv Detail & Related papers (2023-02-21T23:15:33Z) - Importance of Synthesizing High-quality Data for Text-to-SQL Parsing [71.02856634369174]
State-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data.
We propose a novel framework that incorporates key relationships from the schema, imposes strong typing, and conducts schema-weighted column sampling.
arXiv Detail & Related papers (2022-12-17T02:53:21Z) - Improving Text-to-SQL Semantic Parsing with Fine-grained Query
Understanding [84.04706075621013]
We present a general-purpose, modular neural semantic parsing framework based on token-level fine-grained query understanding.
Our framework consists of three modules: a named entity recognizer (NER), a neural entity linker (NEL), and a neural semantic parser (NSP).
arXiv Detail & Related papers (2022-09-28T21:00:30Z) - SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers [61.48159785138462]
This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in neural network based approaches (called SUN).
Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms competitors and achieves new state-of-the-art results.
arXiv Detail & Related papers (2022-09-14T06:27:51Z) - S$^2$SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder
for Text-to-SQL Parsers [66.78665327694625]
We propose S$^2$SQL, injecting syntax into the question-schema interaction graph encoder for text-to-SQL parsing.
We also employ the decoupling constraint to induce diverse edge embedding, which further improves the network's performance.
Experiments on the Spider and robustness setting Spider-Syn demonstrate that the proposed approach outperforms all existing methods when pre-training models are used.
arXiv Detail & Related papers (2022-03-14T09:49:15Z) - SeaD: End-to-end Text-to-SQL Generation with Schema-aware Denoising [7.127280935638075]
In the text-to-SQL task, seq-to-seq models often lead to sub-optimal performance due to limitations in their architecture.
We present a simple yet effective approach that adapts a transformer-based seq-to-seq model to robust text-to-SQL generation.
arXiv Detail & Related papers (2021-05-17T14:49:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.