IGSQL: Database Schema Interaction Graph Based Neural Model for
Context-Dependent Text-to-SQL Generation
- URL: http://arxiv.org/abs/2011.05744v1
- Date: Wed, 11 Nov 2020 12:56:21 GMT
- Title: IGSQL: Database Schema Interaction Graph Based Neural Model for
Context-Dependent Text-to-SQL Generation
- Authors: Yitao Cai, Xiaojun Wan
- Abstract summary: We propose a database schema interaction graph encoder to utilize historical information of database schema items.
We evaluate our model on the benchmark SParC and CoSQL datasets.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Context-dependent text-to-SQL task has drawn much attention in recent years.
Previous models for the context-dependent text-to-SQL task concentrate only on
utilizing historical user inputs. In this work, in addition to using encoders
to capture historical information of user inputs, we propose a database schema
interaction graph encoder to utilize historical information of database
schema items. In the decoding phase, we introduce a gate mechanism to weigh the
importance of different vocabularies and then make the prediction of SQL
tokens. We evaluate our model on the benchmark SParC and CoSQL datasets, which
are two large, complex, context-dependent, cross-domain text-to-SQL datasets. Our
model outperforms the previous state-of-the-art model by a large margin and
achieves new state-of-the-art results on the two datasets. The comparison and
ablation results demonstrate the efficacy of our model and the usefulness of
the database schema interaction graph encoder.
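
The abstract names two components: an encoder that carries schema-item information across interaction turns, and a decoder gate that weighs different output vocabularies before predicting each SQL token. Below is a minimal PyTorch sketch of such a gate, assuming a pointer-generator-style split between SQL keywords and schema items; the module name, shapes, and dot-product attention are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSQLOutput(nn.Module):
    """Hypothetical decoder output layer: a scalar gate weighs a
    distribution over SQL keywords against a pointer distribution
    over database schema items (names and shapes are assumptions)."""

    def __init__(self, hidden_size: int, num_keywords: int):
        super().__init__()
        self.keyword_proj = nn.Linear(hidden_size, num_keywords)
        self.gate_proj = nn.Linear(hidden_size, 1)

    def forward(self, dec_state, schema_enc):
        # dec_state:  (batch, hidden)            current decoder state
        # schema_enc: (batch, n_items, hidden)   schema-item encodings
        p_keyword = F.softmax(self.keyword_proj(dec_state), dim=-1)
        # Pointer scores via dot-product attention over schema items.
        scores = torch.einsum("bh,bih->bi", dec_state, schema_enc)
        p_schema = F.softmax(scores, dim=-1)
        # Gate in (0, 1) decides how much mass goes to each vocabulary.
        g = torch.sigmoid(self.gate_proj(dec_state))
        # Concatenated mixture still sums to 1: g + (1 - g) = 1.
        return torch.cat([g * p_keyword, (1.0 - g) * p_schema], dim=-1)

# Usage with toy shapes:
layer = GatedSQLOutput(hidden_size=256, num_keywords=64)
probs = layer(torch.randn(2, 256), torch.randn(2, 10, 256))
print(probs.shape, probs.sum(dim=-1))  # torch.Size([2, 74]), ~1.0 each
```

Since the gate g scales the keyword distribution and 1 - g the schema pointer distribution, the concatenated output remains a valid probability distribution over the joint vocabulary.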
Related papers
- Augmenting Multi-Turn Text-to-SQL Datasets with Self-Play [46.07002748587857]
We explore augmenting the training datasets using self-play, which leverages contextual information to synthesize new interactions.
We find that self-play improves the accuracy of a strong baseline on SParC and CoSQL, two widely used text-to-SQL datasets.
arXiv Detail & Related papers (2022-10-21T16:40:07Z) - STAR: SQL Guided Pre-Training for Context-dependent Text-to-SQL Parsing [64.80483736666123]
We propose a novel pre-training framework STAR for context-dependent text-to-SQL parsing.
In addition, we construct a large-scale context-dependent text-to-SQL conversation corpus to pre-train STAR.
Extensive experiments show that STAR achieves new state-of-the-art performance on two downstream benchmarks.
arXiv Detail & Related papers (2022-10-21T11:30:07Z) - HIE-SQL: History Information Enhanced Network for Context-Dependent
Text-to-SQL Semantic Parsing [1.343950231082215]
We propose a History Information Enhanced text-to-SQL model (HIE-SQL) to exploit context-dependence information from both history utterances and the last predicted SQL query.
We show our methods improve the performance of HIE-SQL by a significant margin, which achieves new state-of-the-art results on the two context-dependent text-to-SQL benchmarks.
arXiv Detail & Related papers (2022-03-14T11:58:37Z) - S$^2$SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder
for Text-to-SQL Parsers [66.78665327694625]
We propose S$^2$SQL, injecting syntax into the question-schema interaction graph encoder for Text-to-SQL parsing.
We also employ the decoupling constraint to induce diverse edge embedding, which further improves the network's performance.
Experiments on the Spider and robustness setting Spider-Syn demonstrate that the proposed approach outperforms all existing methods when pre-training models are used.
arXiv Detail & Related papers (2022-03-14T09:49:15Z) - SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL [29.328698264910596]
One of the most challenging problems of Text-to-SQL is how to generalize the trained model to unseen database schemas.
We propose a Structure-Aware Dual Graph Aggregation Network (SADGA) for cross-domain Text-to-SQL.
We achieve 3rd place on the challenging Text-to-SQL benchmark Spider at the time of writing.
arXiv Detail & Related papers (2021-11-01T01:50:28Z) - ShadowGNN: Graph Projection Neural Network for Text-to-SQL Parser [36.12921337235763]
We propose a new architecture, ShadowGNN, which processes schemas at abstract and semantic levels.
On the challenging Text-to-SQL benchmark Spider, empirical results show that ShadowGNN outperforms state-of-the-art models.
arXiv Detail & Related papers (2021-04-10T05:48:28Z) - Learning Contextual Representations for Semantic Parsing with
Generation-Augmented Pre-Training [86.91380874390778]
We present Generation-Augmented Pre-training (GAP), which jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data.
Based on experimental results, neural semantic parsers that leverage GAP obtain new state-of-the-art results on both the SPIDER and CRITERIA-TO-SQL benchmarks.
arXiv Detail & Related papers (2020-12-18T15:53:50Z) - GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing [117.98107557103877]
We present GraPPa, an effective pre-training approach for table semantic parsing.
We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar.
To maintain the model's ability to represent real-world data, we also include masked language modeling.
arXiv Detail & Related papers (2020-09-29T08:17:58Z)