Benchmarking the Text-to-SQL Capability of Large Language Models: A
Comprehensive Evaluation
- URL: http://arxiv.org/abs/2403.02951v2
- Date: Wed, 6 Mar 2024 08:43:17 GMT
- Title: Benchmarking the Text-to-SQL Capability of Large Language Models: A
Comprehensive Evaluation
- Authors: Bin Zhang, Yuxiao Ye, Guoqing Du, Xiaoru Hu, Zhishuai Li, Sun Yang,
Chi Harold Liu, Rui Zhao, Ziyue Li, Hangyu Mao
- Abstract summary: Large Language Models (LLMs) have emerged as a powerful tool in advancing the Text-to-SQL task.
There is still no consensus on the optimal prompt templates and design frameworks.
Existing benchmarks inadequately explore the performance of LLMs across the various sub-tasks of the Text-to-SQL process.
- Score: 33.41556606816004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have emerged as a powerful tool in advancing the
Text-to-SQL task, significantly outperforming traditional methods.
Nevertheless, as a nascent research field, there is still no consensus on the
optimal prompt templates and design frameworks. Additionally, existing
benchmarks inadequately explore the performance of LLMs across the various
sub-tasks of the Text-to-SQL process, which hinders the assessment of LLMs'
cognitive capabilities and the optimization of LLM-based solutions. To address
the aforementioned issues, we firstly construct a new dataset designed to
mitigate the risk of overfitting in LLMs. Then we formulate five evaluation
tasks to comprehensively assess the performance of diverse methods across
various LLMs throughout the Text-to-SQL process. Our study highlights the
performance disparities among LLMs and proposes optimal in-context learning
solutions tailored to each task. These findings offer valuable insights for
enhancing the development of LLM-based Text-to-SQL systems.
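The abstract centers on in-context learning solutions for Text-to-SQL. As a minimal sketch of what such a prompt looks like, the function below assembles a few-shot Text-to-SQL prompt from a schema, demonstration pairs, and a target question; the template layout and the example schema are illustrative assumptions, not the paper's actual prompt design.

```python
# A hypothetical few-shot Text-to-SQL prompt builder (illustrative only).
def build_text_to_sql_prompt(schema: str, question: str,
                             demonstrations: list[tuple[str, str]]) -> str:
    """Assemble an in-context-learning prompt: schema, demo pairs, target question."""
    parts = [f"Database schema:\n{schema}\n"]
    for demo_question, demo_sql in demonstrations:
        parts.append(f"Question: {demo_question}\nSQL: {demo_sql}\n")
    parts.append(f"Question: {question}\nSQL:")
    return "\n".join(parts)

schema = "CREATE TABLE employees(id INT, name TEXT, salary INT);"
demos = [("How many employees are there?", "SELECT COUNT(*) FROM employees;")]
prompt = build_text_to_sql_prompt(schema, "Who earns the most?", demos)
```

The prompt ends with a bare `SQL:` so the model's completion is the predicted query; demonstration selection and ordering are among the design choices the benchmark evaluates.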
Related papers
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z) - Meta Reasoning for Large Language Models [58.87183757029041]
We introduce Meta-Reasoning Prompting (MRP), a novel and efficient system prompting method for large language models (LLMs)
MRP guides LLMs to dynamically select and apply different reasoning methods based on the specific requirements of each task.
We evaluate the effectiveness of MRP through comprehensive benchmarks.
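The MRP idea of letting the model pick a reasoning method per task can be sketched as a two-stage prompt. The method menu and wording below are illustrative assumptions about MRP, not its published template.

```python
# A hypothetical Meta-Reasoning-style prompt (illustrative assumption).
REASONING_METHODS = ["chain-of-thought", "step-back", "decomposition"]

def meta_reasoning_prompt(task: str) -> str:
    """Ask the model to first select a reasoning method, then apply it."""
    menu = "\n".join(f"- {m}" for m in REASONING_METHODS)
    return (
        f"Task: {task}\n"
        f"First, select the most suitable reasoning method from:\n{menu}\n"
        "Then solve the task using the selected method."
    )

prompt = meta_reasoning_prompt("Translate this question into SQL.")
```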
arXiv Detail & Related papers (2024-06-17T16:14:11Z) - Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond [24.151927600694066]
Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs.
This paper conducts the first comprehensive experiment to investigate how far we have been in applying Large Language Models (LLMs) to generate high-quality commit messages.
arXiv Detail & Related papers (2024-04-23T08:24:43Z) - PPTC-R benchmark: Towards Evaluating the Robustness of Large Language
Models for PowerPoint Task Completion [96.47420221442397]
We construct adversarial user instructions by attacking user instructions at sentence, semantic, and multi-language levels.
We test 3 closed-source and 4 open-source LLMs using a benchmark that incorporates robustness settings.
We find that GPT-4 exhibits the highest performance and strong robustness in our benchmark.
arXiv Detail & Related papers (2024-03-06T15:33:32Z) - Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm [19.06214756792692]
In-context learning of large-language models (LLMs) has achieved remarkable success in the field of natural language processing.
Case studies reveal that the single-step chain-of-thought approach faces challenges such as attention diffusion and inadequate performance in complex tasks like text-to-SQL.
A workflow paradigm is proposed, aiming to enhance the attention and problem-solving scope of LLMs through decomposition.
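One reading of this decomposition idea is to replace a single chain-of-thought call with several narrow sub-task calls, e.g. schema linking, query skeleton drafting, then final SQL generation. The three-step split and the stubbed `call_llm` below are assumptions for illustration, not the paper's exact pipeline.

```python
# A sketch of a decomposed Text-to-SQL workflow. `call_llm` is a stand-in
# for a real model API, returning canned answers keyed on a sub-task tag.
def call_llm(prompt: str) -> str:
    canned = {
        "schema_linking": "employees.salary, employees.name",
        "skeleton": "SELECT _ FROM _ ORDER BY _ DESC LIMIT 1",
        "generation": "SELECT name FROM employees ORDER BY salary DESC LIMIT 1;",
    }
    for tag, answer in canned.items():
        if tag in prompt:
            return answer
    return ""

def decomposed_text_to_sql(question: str, schema: str) -> str:
    """Run three focused sub-steps instead of one monolithic prompt."""
    columns = call_llm(f"[schema_linking] Relevant columns for: {question}\n{schema}")
    draft = call_llm(f"[skeleton] Draft a query outline for: {question} using {columns}")
    return call_llm(f"[generation] Complete the query {draft} using {columns}")

sql = decomposed_text_to_sql(
    "Who earns the most?",
    "CREATE TABLE employees(id INT, name TEXT, salary INT);",
)
```

Each sub-prompt keeps the model's attention on one narrow decision, which is the motivation the abstract gives for the workflow paradigm.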
arXiv Detail & Related papers (2024-02-16T13:24:05Z) - Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization [132.25202059478065]
We benchmark large language models (LLMs) on instruction controllable text summarization.
Our study reveals that instruction controllable text summarization remains a challenging task for LLMs.
arXiv Detail & Related papers (2023-11-15T18:25:26Z) - Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation [76.76046657162306]
Large language models (LLMs) have emerged as a new paradigm for the Text-to-SQL task.
arXiv Detail & Related papers (2023-08-29T14:59:54Z) - SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) [53.95151604061761]
This paper introduces the SQL-PaLM framework for enhancing Text-to-SQL using large language models (LLMs).
With few-shot prompting, we explore the effectiveness of consistency decoding with execution-based error analyses.
With instruction fine-tuning, we delve deep in understanding the critical paradigms that influence the performance of tuned LLMs.
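The consistency-decoding-with-execution idea above can be sketched as follows: sample several candidate queries, drop any that fail to execute, and keep a query from the group whose execution results the most candidates agree on. The hard-coded candidate list stands in for real model samples, and the majority-vote scheme is one plausible reading of the approach.

```python
# Execution-based consistency decoding over candidate SQL queries (sketch).
import sqlite3

def consistency_decode(candidates: list[str], db: sqlite3.Connection) -> str:
    """Group candidates by execution result; return one query from the largest group."""
    results: dict[tuple, list[str]] = {}
    for sql in candidates:
        try:
            rows = tuple(db.execute(sql).fetchall())
        except sqlite3.Error:
            continue  # execution-based filtering: discard invalid candidates
        results.setdefault(rows, []).append(sql)
    if not results:
        return ""
    winner = max(results.items(), key=lambda kv: len(kv[1]))
    return winner[1][0]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(x INT)")
db.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
best = consistency_decode(
    ["SELECT COUNT(*) FROM t", "SELECT COUNT(x) FROM t", "SELECT * FROM missing"],
    db,
)
```

Here the two `COUNT` variants produce the same result and outvote the query against a nonexistent table, which errors out and is filtered.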
arXiv Detail & Related papers (2023-05-26T21:39:05Z) - Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A
Study on Prompt Design Strategies [20.15851744895469]
In-context learning (ICL) has emerged as a new approach to various natural language processing tasks.
In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources.
arXiv Detail & Related papers (2023-05-21T22:44:25Z) - How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain,
and Cross-domain Settings [12.288808992805494]
Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task.
Previous research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance their performance.
arXiv Detail & Related papers (2023-05-19T17:43:58Z)
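The demonstration-retrieval strategies mentioned above amount to choosing few-shot examples whose questions resemble the new question. The token-overlap score below is a simple stand-in for the embedding-based retrievers such studies typically use; the example pool is made up for illustration.

```python
# A minimal demonstration-retrieval sketch using token overlap as similarity.
def retrieve_demonstrations(question: str,
                            pool: list[tuple[str, str]],
                            k: int = 2) -> list[tuple[str, str]]:
    """Return the k (question, SQL) pairs most similar to `question`."""
    q_tokens = set(question.lower().split())

    def overlap(item: tuple[str, str]) -> int:
        return len(q_tokens & set(item[0].lower().split()))

    return sorted(pool, key=overlap, reverse=True)[:k]

pool = [
    ("How many users signed up?", "SELECT COUNT(*) FROM users;"),
    ("List all product names", "SELECT name FROM products;"),
    ("How many orders were placed?", "SELECT COUNT(*) FROM orders;"),
]
demos = retrieve_demonstrations("How many products were sold?", pool, k=2)
```

The retrieved pairs would then be spliced into the few-shot prompt ahead of the target question.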
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.