Is Long Context All You Need? Leveraging LLM's Extended Context for NL2SQL
- URL: http://arxiv.org/abs/2501.12372v3
- Date: Thu, 13 Feb 2025 23:39:12 GMT
- Title: Is Long Context All You Need? Leveraging LLM's Extended Context for NL2SQL
- Authors: Yeounoh Chung, Gaurav T. Kakkar, Yu Gan, Brenton Milne, Fatma Ozcan
- Abstract summary: Large Language Models (LLMs) have demonstrated impressive capabilities across a range of natural language processing tasks.
One approach to this semantic ambiguity is to provide sufficient additional contextual information.
We show that long context LLMs are robust and do not get lost in the extended contextual information.
- Score: 1.1694928565998557
- Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities across a range of natural language processing tasks. In particular, improvements in reasoning abilities and the expansion of context windows have opened new avenues for leveraging these powerful models. NL2SQL is challenging in that the natural language question is inherently ambiguous, while SQL generation requires a precise understanding of complex data schema and semantics. One approach to this semantic ambiguity is to provide sufficient additional contextual information. In this work, we explore the performance and latency trade-offs of the extended context window (a.k.a. long context) offered by Google's state-of-the-art LLM (*gemini-1.5-pro*). We study the impact of various types of contextual information, including column example values, question and SQL query pairs, user-provided hints, SQL documentation, and schema. To the best of our knowledge, this is the first work to study how the extended context window and extra contextual information can help NL2SQL generation with respect to both accuracy and latency cost. We show that long context LLMs are robust and do not get lost in the extended contextual information. Additionally, our long-context NL2SQL pipeline based on Google's *gemini-1.5-pro* achieves strong performance on various benchmark datasets without fine-tuning or expensive self-consistency-based techniques.
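As a rough illustration of this recipe, the sketch below concatenates each context source studied in the paper into one long prompt and sends it to the model. It is a minimal sketch, not the authors' pipeline: the helper name, section headers, and example schema are assumptions, while the client call follows the public google-generativeai Python SDK.

```python
# Minimal sketch: pack every available context source into one long prompt.
import google.generativeai as genai

def build_long_context_prompt(schema_ddl, column_examples, qa_pairs,
                              hints, sql_docs, question):
    """Concatenate all context sources studied in the paper into one prompt."""
    sections = [
        "-- Database schema --\n" + schema_ddl,
        "-- Example values per column --\n" + "\n".join(
            f"{col}: {', '.join(vals)}" for col, vals in column_examples.items()),
        "-- Example question/SQL pairs --\n" + "\n\n".join(
            f"Q: {q}\nSQL: {s}" for q, s in qa_pairs),
        "-- User-provided hints --\n" + "\n".join(hints),
        "-- SQL documentation --\n" + sql_docs,
        f"-- Task --\nTranslate to SQL: {question}\nSQL:",
    ]
    return "\n\n".join(sections)

genai.configure(api_key="YOUR_API_KEY")  # assumption: caller supplies a key
model = genai.GenerativeModel("gemini-1.5-pro")
prompt = build_long_context_prompt(
    schema_ddl="CREATE TABLE orders (id INT, total REAL, placed_at TEXT);",
    column_examples={"orders.placed_at": ["2024-01-03", "2024-02-11"]},
    qa_pairs=[("How many orders are there?", "SELECT COUNT(*) FROM orders;")],
    hints=["totals are in USD"],
    sql_docs="SQLite date functions: date(), strftime(), julianday().",
    question="What was the total order value in January 2024?",
)
response = model.generate_content(prompt)
print(response.text)
```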
Related papers
- Semantic Captioning: Benchmark Dataset and Graph-Aware Few-Shot In-Context Learning for SQL2Text [3.4688186440441893]
Large Language Models (LLMs) have demonstrated remarkable performance in various NLP tasks.
The reverse process, translating code into natural language, termed semantic captioning, has received less attention.
In this paper, we focus on the captioning of SQL queries (SQL2Text) to address the critical need for understanding and explaining queries.
arXiv Detail & Related papers (2025-01-06T17:36:09Z)
- RB-SQL: A Retrieval-based LLM Framework for Text-to-SQL [48.516004807486745]
Large language models (LLMs) with in-context learning have significantly improved the performance of the text-to-SQL task.
We propose RB-SQL, a novel retrieval-based framework for in-context prompt engineering.
Experiment results demonstrate that our model achieves better performance than several competitive baselines on the public BIRD and Spider datasets.
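The retrieval idea can be sketched with nothing but the standard library. The snippet below is an illustration, not the RB-SQL implementation (RB-SQL trains dedicated retrieval modules, whereas this uses plain string similarity): it selects the stored question-SQL pairs most similar to the incoming question for use as few-shot examples.

```python
# Sketch of retrieval-based few-shot selection: pick the question/SQL
# exemplars most similar to the new question and place them in the prompt.
# The string-similarity measure is a deliberately simple stand-in.
from difflib import SequenceMatcher

def retrieve_exemplars(question, exemplar_pool, k=3):
    """Return the k (question, sql) pairs most similar to `question`."""
    scored = sorted(
        exemplar_pool,
        key=lambda pair: SequenceMatcher(None, question.lower(),
                                         pair[0].lower()).ratio(),
        reverse=True,
    )
    return scored[:k]

pool = [
    ("How many orders were placed in 2023?",
     "SELECT COUNT(*) FROM orders WHERE strftime('%Y', placed_at) = '2023';"),
    ("List customers from Berlin.",
     "SELECT name FROM customers WHERE city = 'Berlin';"),
]
print(retrieve_exemplars("How many orders were placed last year?", pool, k=1))
```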
arXiv Detail & Related papers (2024-07-11T08:19:58Z)
- CoE-SQL: In-Context Learning for Multi-Turn Text-to-SQL with Chain-of-Editions [22.493487741249716]
Large Language Models (LLMs) have been demonstrated to possess impressive capabilities in a variety of domains and tasks.
We investigate the issue of prompt design in the multi-turn text-to-SQL task and attempt to enhance the LLMs' reasoning capacity.
arXiv Detail & Related papers (2024-05-04T16:56:14Z)
- Blar-SQL: Faster, Stronger, Smaller NL2SQL [0.0]
We show how task decomposition can greatly benefit Large Language Models (LLMs) in database understanding and query generation.
We propose a new framework to divide the schema into chunks in order to fit more information into a limited context.
Our results are comparable with those obtained by GPT-4, while our model is 135 times smaller, 90 times faster, and more than 100 times cheaper.
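The schema-chunking idea is simple to sketch. Below is a minimal illustration under assumptions, not Blar-SQL's code; the greedy packing and the 4-characters-per-token estimate are stand-ins for whatever budget the real system uses.

```python
# Sketch: greedily pack per-table DDL statements into chunks that each
# fit within a rough token budget, so every chunk can be prompted
# separately against a limited context window.
def chunk_schema(table_ddls, max_tokens=2000):
    """Split a list of CREATE TABLE statements into budget-sized chunks."""
    chunks, current, used = [], [], 0
    for ddl in table_ddls:
        cost = max(1, len(ddl) // 4)  # crude chars-to-tokens estimate
        if current and used + cost > max_tokens:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(ddl)
        used += cost
    if current:
        chunks.append("\n".join(current))
    return chunks
```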
arXiv Detail & Related papers (2024-01-04T16:50:52Z)
- SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) [53.95151604061761]
This paper introduces a framework for enhancing Text-to-SQL using large language models (LLMs).
With few-shot prompting, we explore the effectiveness of consistency decoding with execution-based error analyses.
With instruction fine-tuning, we delve deeply into understanding the critical paradigms that influence the performance of tuned LLMs.
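Execution-based consistency decoding is straightforward to sketch: sample several candidate queries, execute each, drop the ones that fail, and keep the answer produced most often. The snippet below is a minimal illustration of that idea, not SQL-PaLM's implementation; the candidate list would come from LLM sampling in a real pipeline.

```python
# Sketch of execution-based consistency decoding: execute every sampled
# candidate, filter out queries that error, and return the candidate
# whose execution result occurs most often.
import sqlite3
from collections import Counter

def consistency_decode(candidates, db_path):
    """Return the candidate SQL whose execution result is most common."""
    results = {}
    for sql in candidates:
        try:
            with sqlite3.connect(db_path) as conn:
                rows = conn.execute(sql).fetchall()
            results[sql] = tuple(sorted(map(tuple, rows)))
        except sqlite3.Error:
            continue  # execution-based error filtering: drop invalid SQL
    if not results:
        return None
    majority = Counter(results.values()).most_common(1)[0][0]
    return next(sql for sql, res in results.items() if res == majority)
```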
arXiv Detail & Related papers (2023-05-26T21:39:05Z)
- UNITE: A Unified Benchmark for Text-to-SQL Evaluation [72.72040379293718]
We introduce a UNIfied benchmark for Text-to-SQL systems.
It is composed of publicly available text-to-SQL datasets and 29K databases.
Compared to the widely used Spider benchmark, we introduce a threefold increase in SQL patterns.
arXiv Detail & Related papers (2023-05-25T17:19:52Z)
- QURG: Question Rewriting Guided Context-Dependent Text-to-SQL Semantic Parsing [46.05006486399823]
This paper presents QURG, a novel Question Rewriting Guided approach to help the models achieve adequate contextual understanding.
We first train a question rewriting model to complete the current question based on the question context, and convert the rewriting results into a rewriting edit matrix.
We further design a two-stream matrix encoder to jointly model rewriting relations between question and context, and the schema linking relations between natural language and structured schema.
arXiv Detail & Related papers (2023-05-11T08:45:55Z)
- HIE-SQL: History Information Enhanced Network for Context-Dependent Text-to-SQL Semantic Parsing [1.343950231082215]
We propose a History Information Enhanced text-to-SQL model (HIE-SQL) to exploit context-dependence information from both history utterances and the last predicted SQL query.
We show that our methods improve the performance of HIE-SQL by a significant margin, achieving new state-of-the-art results on two context-dependent text-to-SQL benchmarks.
arXiv Detail & Related papers (2022-03-14T11:58:37Z)
- Weakly Supervised Text-to-SQL Parsing through Question Decomposition [53.22128541030441]
We take advantage of the recently proposed question meaning representation called QDMR.
Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries.
Our results show that the weakly supervised models perform competitively with those trained on NL-SQL benchmark data.
arXiv Detail & Related papers (2021-12-12T20:02:42Z)
- Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering [78.9863753810787]
A large amount of the world's knowledge is stored in structured databases.
SQL queries can answer questions that require complex reasoning, as well as offering full explainability.
arXiv Detail & Related papers (2021-08-05T22:04:13Z)