Analyzing the Effectiveness of Large Language Models on Text-to-SQL Synthesis
- URL: http://arxiv.org/abs/2401.12379v1
- Date: Mon, 22 Jan 2024 22:05:42 GMT
- Title: Analyzing the Effectiveness of Large Language Models on Text-to-SQL Synthesis
- Authors: Richard Roberson, Gowtham Kaki, Ashutosh Trivedi
- Abstract summary: This study investigates various approaches to using Large Language Models for Text-to-SQL program synthesis.
The goal was to input a natural language question along with the database schema and output the correct SELECT query.
- Score: 4.412170175171256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates various approaches to using Large Language Models
(LLMs) for Text-to-SQL program synthesis, focusing on the outcomes and insights
derived. Employing the popular Text-to-SQL dataset, Spider, the goal was to
input a natural language question along with the database schema and output the
correct SQL SELECT query. The initial approach was to fine-tune a local and
open-source model to generate the SELECT query. After QLoRA fine-tuning
WizardLM's WizardCoder-15B model on the Spider dataset, the execution accuracy
for generated queries rose to a high of 61%. With the second approach, using
the fine-tuned gpt-3.5-turbo-16k (Few-shot) + gpt-4-turbo (Zero-shot error
correction), the execution accuracy reached a high of 82.1%. Of all the
incorrect queries, most can be categorized into seven categories of
what went wrong: selecting the wrong columns or wrong order of columns,
grouping by the wrong column, predicting the wrong values in conditionals,
using different aggregates than the ground truth, extra or too few JOIN
clauses, inconsistencies in the Spider dataset, and lastly completely incorrect
query structure. Nearly all of the incorrect queries fall into these
categories, which makes clear where the faults still lie in LLM program
synthesis and where it can be improved.
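For concreteness, here is a minimal sketch of the execution-accuracy check used throughout, assuming Spider's SQLite databases; the file path in the usage comment is illustrative, not taken from the paper:

```python
import sqlite3

def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """Compare a predicted SELECT query against the gold query by the
    rows each returns; a query that fails to execute counts as wrong."""
    conn = sqlite3.connect(db_path)
    try:
        pred = conn.execute(predicted_sql).fetchall()
        gold = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False
    finally:
        conn.close()
    # Sort by repr so rows containing NULLs still compare cleanly.
    return sorted(pred, key=repr) == sorted(gold, key=repr)

# Illustrative usage on one Spider example:
# execution_match("database/concert_singer/concert_singer.sqlite",
#                 "SELECT count(*) FROM singer",
#                 "SELECT count(*) FROM singer")
```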
Related papers
- Context-Aware SQL Error Correction Using Few-Shot Learning -- A Novel Approach Based on NLQ, Error, and SQL Similarity [0.0]
This paper introduces a novel few-shot learning-based approach for error correction in SQL generation.
It enhances the accuracy of generated queries by selecting the most suitable few-shot error-correction examples for a given natural language question (NLQ).
In experiments with an open-source dataset, the proposed model improves error fixing by 39.2% over using no error correction and by 10% over a simple error-correction method.
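A minimal sketch of similarity-based example selection in the spirit of this approach, covering only the NLQ-similarity signal (the paper's title indicates it also weighs error and SQL similarity); the embedding model and example-bank format are assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def pick_correction_examples(nlq: str, bank: list[dict], k: int = 3) -> list[dict]:
    """Rank stored {"nlq", "bad_sql", "fixed_sql"} triples by cosine
    similarity to the incoming question and keep the top k."""
    q = encoder.encode([nlq])[0]
    vecs = encoder.encode([ex["nlq"] for ex in bank])
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [bank[i] for i in np.argsort(-sims)[:k]]
```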
arXiv Detail & Related papers (2024-10-11T18:22:08Z)
- DataGpt-SQL-7B: An Open-Source Language Model for Text-to-SQL [7.76068876576964]
We propose a suite of compact, fine-tuned models and self-refine mechanisms to democratize data access and analysis for non-expert users.
Our system, DataGpt-SQL, achieved 87.2% accuracy on the Spider-dev benchmark.
arXiv Detail & Related papers (2024-09-24T11:38:08Z)
- SelECT-SQL: Self-correcting ensemble Chain-of-Thought for Text-to-SQL [3.422309388045878]
We introduce SelECT-SQL, a novel in-context learning solution that uses an algorithmic combination of chain-of-thought prompting, self-correction, and ensemble methods.
Specifically, when configured using GPT-3.5-Turbo as the base LLM, SelECT-SQL achieves 84.2% execution accuracy on the Spider leaderboard's development set.
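A sketch of the ensemble step in isolation, assuming candidate queries have already been sampled; the voting rule here is a plain majority over result sets, which may differ from the paper's exact combination:

```python
import sqlite3

def vote_by_execution(db_path: str, candidates: list[str]) -> str:
    """Group candidate queries by the result set they produce and
    return one representative of the largest group."""
    conn = sqlite3.connect(db_path)
    groups: dict[frozenset, list[str]] = {}
    for sql in candidates:
        try:
            rows = frozenset(map(tuple, conn.execute(sql).fetchall()))
        except sqlite3.Error:
            continue  # discard candidates that fail to execute
        groups.setdefault(rows, []).append(sql)
    conn.close()
    if not groups:
        return candidates[0]  # nothing executed; fall back to the first sample
    return max(groups.values(), key=len)[0]
```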
arXiv Detail & Related papers (2024-09-16T05:40:18Z)
- DAC: Decomposed Automation Correction for Text-to-SQL [51.48239006107272]
We introduce Decomposed Automation Correction (DAC), which corrects text-to-SQL output by decomposing the task into entity linking and skeleton parsing.
We show that our method improves performance by 3.7% on average across Spider, BIRD, and KaggleDBQA compared with the baseline method.
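As a rough illustration of the skeleton-parsing half of that decomposition, a crude regex-based sketch; DAC's actual components are learned, not rule-based:

```python
import re

SQL_KEYWORDS = {
    "select", "from", "where", "group", "by", "order", "having", "join",
    "on", "and", "or", "not", "in", "as", "count", "avg", "sum", "min",
    "max", "distinct", "limit", "desc", "asc",
}

def sql_skeleton(sql: str) -> str:
    """Mask literals and identifiers so only the structural skeleton
    (keywords, operators, punctuation) remains."""
    s = re.sub(r"'[^']*'", "_", sql)        # string literals
    s = re.sub(r"\b\d+(\.\d+)?\b", "_", s)  # numeric literals
    tokens = re.findall(r"\w+|\S", s)
    masked = [t if t.lower() in SQL_KEYWORDS or not t[0].isalpha() else "_"
              for t in tokens]
    return " ".join(masked)

# sql_skeleton("SELECT name FROM singer WHERE age > 30")
# -> "SELECT _ FROM _ WHERE _ > _"
```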
arXiv Detail & Related papers (2024-08-16T14:43:15Z)
- Fine-Tuning Language Models for Context-Specific SQL Query Generation [0.0]
This paper presents a novel approach to fine-tuning open-source large language models (LLMs) for the task of transforming natural language into SQL queries.
We introduce models specialized in generating SQL queries, trained on synthetic datasets tailored to the Snowflake SQL and GoogleSQL dialects.
Our methodology involves generating a context-specific dataset using GPT-4, then fine-tuning three open-source LLMs (Starcoder Plus, Code-Llama, and Mistral) with the LoRA technique to optimize for resource constraints.
The fine-tuned models demonstrate superior performance in zero-shot settings compared to the baseline GPT-4.
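A minimal sketch of the LoRA setup the summary describes, using the Hugging Face peft library; the checkpoint id and hyperparameters are illustrative, not the paper's reported configuration:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Any of the three tuned bases could stand here; Mistral is shown
# as an example checkpoint id.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_cfg = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```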
arXiv Detail & Related papers (2023-12-04T18:04:27Z)
- Benchmarking and Improving Text-to-SQL Generation under Ambiguity [25.283118418288293]
We develop a novel benchmark called AmbiQT where each text is interpretable as two plausible SQL queries due to lexical and/or structural ambiguity.
We propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling.
arXiv Detail & Related papers (2023-10-20T17:00:53Z)
- SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) [53.95151604061761]
This paper introduces the SQL-PaLM framework for enhancing Text-to-SQL using large language models (LLMs).
With few-shot prompting, we explore the effectiveness of consistency decoding with execution-based error analyses.
With instruction fine-tuning, we delve deep into understanding the critical paradigms that influence the performance of tuned LLMs.
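A sketch of the execution-based error analysis idea in its simplest form: run the candidate, and if the database rejects it, fold the error back into a repair round. The prompt wording and helper names are assumptions:

```python
import sqlite3

def try_execute(db_path: str, sql: str):
    """Return (rows, None) on success or (None, error_message) on failure."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql).fetchall(), None
    except sqlite3.Error as err:
        return None, str(err)
    finally:
        conn.close()

def repair_prompt(question: str, schema: str, bad_sql: str, error: str) -> str:
    """Build a correction prompt from the execution error for the next LLM call."""
    return (
        f"Database schema:\n{schema}\n\n"
        f"Question: {question}\n"
        f"This SQL query failed:\n{bad_sql}\n"
        f"Error: {error}\n"
        "Write a corrected SQL SELECT query."
    )
```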
arXiv Detail & Related papers (2023-05-26T21:39:05Z)
- Wav2SQL: Direct Generalizable Speech-To-SQL Parsing [55.10009651476589]
Speech-to-SQL (S2SQL) aims to convert spoken questions into SQL queries given databases.
We propose the first direct speech-to-SQL parsing model, Wav2SQL, which avoids error compounding across cascaded systems.
Experimental results demonstrate that Wav2SQL avoids error compounding and achieves state-of-the-art results with up to a 2.5% accuracy improvement over the baseline.
arXiv Detail & Related papers (2023-05-21T19:26:46Z)
- Improving Text-to-SQL Semantic Parsing with Fine-grained Query Understanding [84.04706075621013]
We present a general-purpose, modular neural semantic parsing framework based on token-level fine-grained query understanding.
Our framework consists of three modules: a named entity recognizer (NER), a neural entity linker (NEL), and a neural semantic parser (NSP).
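A structural sketch of that three-module pipeline; the callables stand in for trained components, and the interfaces are assumptions rather than the framework's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModularParser:
    """NER -> NEL -> NSP, chained at the token level."""
    ner: Callable[[str], list[str]]          # question -> entity mentions
    nel: Callable[[list[str], dict], dict]   # mentions + schema -> links
    nsp: Callable[[str, dict], str]          # question + links -> SQL

    def parse(self, question: str, schema: dict) -> str:
        mentions = self.ner(question)
        links = self.nel(mentions, schema)
        return self.nsp(question, links)
```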
arXiv Detail & Related papers (2022-09-28T21:00:30Z)
- S$^2$SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder for Text-to-SQL Parsers [66.78665327694625]
We propose S$^2$SQL, which injects syntax into the question-schema interaction graph encoder for Text-to-SQL parsing.
We also employ a decoupling constraint to induce diverse edge embeddings, which further improves the network's performance.
Experiments on Spider and the robustness setting Spider-Syn demonstrate that the proposed approach outperforms all existing methods when pre-training models are used.
arXiv Detail & Related papers (2022-03-14T09:49:15Z)
- Weakly Supervised Text-to-SQL Parsing through Question Decomposition [53.22128541030441]
We take advantage of the recently proposed question meaning representation called QDMR.
Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries.
Our results show that the weakly supervised models perform competitively with those trained on annotated NL-SQL benchmark data.
arXiv Detail & Related papers (2021-12-12T20:02:42Z)