Feather-SQL: A Lightweight NL2SQL Framework with Dual-Model Collaboration Paradigm for Small Language Models
- URL: http://arxiv.org/abs/2503.17811v1
- Date: Sat, 22 Mar 2025 16:22:53 GMT
- Title: Feather-SQL: A Lightweight NL2SQL Framework with Dual-Model Collaboration Paradigm for Small Language Models
- Authors: Wenqi Pei, Hailing Xu, Hengyuan Zhao, Shizheng Hou, Han Chen, Zining Zhang, Pingyi Luo, Bingsheng He
- Abstract summary: Small language models (SLMs) struggle with NL2SQL tasks, exhibiting poor performance and incompatibility with existing frameworks. We introduce Feather-SQL, a new lightweight framework tailored for SLMs. The proposed paradigm raises the accuracy ceiling of SLMs to 54.76%, highlighting its effectiveness.
- Score: 22.960560371494832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural Language to SQL (NL2SQL) has seen significant advancements with large language models (LLMs). However, these models often depend on closed-source systems and high computational resources, posing challenges in data privacy and deployment. In contrast, small language models (SLMs) struggle with NL2SQL tasks, exhibiting poor performance and incompatibility with existing frameworks. To address these issues, we introduce Feather-SQL, a new lightweight framework tailored for SLMs. Feather-SQL improves SQL executability and accuracy through 1) schema pruning and linking and 2) multi-path and multi-candidate generation. Additionally, we introduce the 1+1 Model Collaboration Paradigm, which pairs a strong general-purpose chat model with a fine-tuned SQL specialist, combining strong analytical reasoning with high-precision SQL generation. Experimental results on BIRD demonstrate that Feather-SQL improves NL2SQL performance on SLMs, with an approximately 10% boost for models without fine-tuning. The proposed paradigm raises the accuracy ceiling of SLMs to 54.76%, highlighting its effectiveness.
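As a rough illustration of the pipeline above, the sketch below wires schema pruning, multi-path multi-candidate generation, and an executability filter around the 1+1 pairing of a chat model and a SQL specialist. This is not the authors' code: the `chat_model`/`sql_model` objects, their `generate` interface, the prompt formats, and the sample counts are illustrative assumptions; only the SQLite executability check reflects standard practice.

```python
import sqlite3

def executes_ok(db_path: str, sql: str) -> bool:
    """Executability check: does the candidate SQL run without error?"""
    try:
        con = sqlite3.connect(db_path)
        con.execute(sql).fetchall()
        con.close()
        return True
    except Exception:
        return False

def feather_sql(question, schema_text, chat_model, sql_model, db_path):
    # 1) Schema pruning and linking: the general-purpose chat model selects
    #    the tables/columns relevant to the question (prompt format assumed).
    pruned = chat_model.generate(
        f"Question: {question}\nSchema:\n{schema_text}\n"
        "Return only the tables and columns needed to answer the question."
    )
    # 2) Multi-path, multi-candidate generation: one path uses the pruned
    #    schema, another keeps the full schema as a fallback; the fine-tuned
    #    SQL specialist samples several candidates per path.
    candidates = [
        sql_model.generate(f"Schema:\n{path}\nQuestion: {question}\nSQL:")
        for path in (pruned, schema_text)
        for _ in range(4)
    ]
    # 3) Prefer candidates that actually execute against the database.
    runnable = [c for c in candidates if executes_ok(db_path, c)]
    return runnable[0] if runnable else candidates[0]
```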
Related papers
- Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows [64.94146689665628]
Spider 2.0 is an evaluation framework for real-world text-to-SQL problems derived from enterprise-level database use cases.
The databases in Spider 2.0 are sourced from real data applications, often containing over 1,000 columns and stored in local or cloud database systems such as BigQuery and Snowflake.
We show that solving problems in Spider 2.0 frequently requires understanding and searching through database metadata, dialect documentation, and even project-level codebases.
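As a small aside on what "searching through database metadata" can look like in practice, the query below inspects column metadata via INFORMATION_SCHEMA; the dataset name and filter are placeholders, and the snippet is not drawn from the benchmark itself.

```python
# Illustrative only: the kind of metadata lookup Spider 2.0 tasks reportedly
# require. BigQuery exposes INFORMATION_SCHEMA views per dataset; Snowflake
# offers a similar mechanism. "my_dataset" and the LIKE filter are placeholders.
METADATA_QUERY = """
SELECT table_name, column_name, data_type
FROM my_dataset.INFORMATION_SCHEMA.COLUMNS
WHERE LOWER(column_name) LIKE '%revenue%'
"""
```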
arXiv Detail & Related papers (2024-11-12T12:52:17Z)
- RSL-SQL: Robust Schema Linking in Text-to-SQL Generation [51.00761167842468]
We propose a novel framework called RSL-SQL that combines bidirectional schema linking, contextual information augmentation, a binary selection strategy, and multi-turn self-correction.
Benchmarks demonstrate that our approach achieves SOTA execution accuracy among open-source solutions, with 67.2% on BIRD and 87.9% on Spider using GPT-4o.
Our approach outperforms a series of GPT-4-based Text-to-SQL systems when adopting DeepSeek (much cheaper) with the same intact prompts.
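A hedged sketch of the "bidirectional" schema-linking idea as the abstract describes it: union the columns the model links from the question (forward) with the columns referenced by a preliminary full-schema SQL draft (backward). The `llm.generate` interface and prompt wording are assumptions, not the paper's implementation.

```python
import re

def referenced_columns(sql: str, schema_columns: set) -> set:
    """Backward direction: columns that a draft SQL actually mentions."""
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql.lower()))
    return {c for c in schema_columns if c.lower() in tokens}

def bidirectional_link(question, schema_columns, llm):
    # Forward direction: link columns directly from the question.
    forward = {
        c.strip()
        for c in llm.generate(
            f"Question: {question}\nColumns: {sorted(schema_columns)}\n"
            "List the needed columns, comma-separated."
        ).split(",")
        if c.strip()
    }
    # Backward direction: draft SQL over the full schema, then read back
    # which columns it used.
    draft = llm.generate(f"Write SQL over the full schema for: {question}")
    return forward | referenced_columns(draft, schema_columns)
```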
arXiv Detail & Related papers (2024-10-31T16:22:26Z)
- MSc-SQL: Multi-Sample Critiquing Small Language Models For Text-To-SQL Translation [10.205010004198757]
Text-to-SQL generation enables non-experts to interact with databases via natural language. Recent advances in large closed-source models like GPT-4 present challenges in accessibility, privacy, and latency. We focus on developing small, efficient, and open-source text-to-SQL generation models.
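A minimal sketch of the multi-sample critiquing idea suggested by the title: sample several candidates from a small generator and have a critic model pick among them. The `generate` interfaces and the index-reply protocol are assumptions.

```python
def critique_and_select(question, prompt, generator, critic, n=4):
    # Sample multiple candidate SQL queries from the small generator model.
    candidates = [generator.generate(prompt) for _ in range(n)]
    listing = "\n".join(f"[{i}] {sql}" for i, sql in enumerate(candidates))
    # Ask the critic to compare the samples and reply with an index.
    choice = critic.generate(
        f"Question: {question}\nCandidate SQL queries:\n{listing}\n"
        "Reply with only the index of the best query."
    )
    try:
        return candidates[int(choice.strip())]
    except (ValueError, IndexError):
        return candidates[0]  # fall back if the critic's reply is malformed
```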
arXiv Detail & Related papers (2024-10-16T18:03:24Z)
- LR-SQL: A Supervised Fine-Tuning Method for Text2SQL Tasks under Low-Resource Scenarios [1.4387218083918762]
Large language models revolutionize Text2SQL through supervised fine-tuning.
Yet a crucial limitation is overlooked: the complexity of databases leads to an increased context length.
We propose LR-SQL, which reduces total GPU memory usage by 40% compared to existing fine-tuning methods.
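One plausible reading of this low-resource setup, sketched below under stated assumptions: slice a large schema into small groups of tables so each fine-tuning example carries only part of the schema and the context stays short. The slice size and serialization format are illustrative, not the paper's.

```python
def schema_slices(tables, max_tables=4):
    """Yield short schema strings, each covering at most `max_tables` tables.

    `tables` maps table name -> list of column names. Keeping each training
    example to one slice bounds the context length (and hence GPU memory).
    """
    names = list(tables)
    for i in range(0, len(names), max_tables):
        chunk = names[i:i + max_tables]
        yield "\n".join(f"{t}({', '.join(tables[t])})" for t in chunk)
```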
arXiv Detail & Related papers (2024-10-15T10:02:55Z)
- RB-SQL: A Retrieval-based LLM Framework for Text-to-SQL [48.516004807486745]
Large language models (LLMs) with in-context learning have significantly improved the performance of the text-to-SQL task.
We propose RB-SQL, a novel retrieval-based framework for in-context prompt engineering.
Experimental results demonstrate that our model achieves better performance than several competitive baselines on the public datasets BIRD and Spider.
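The retrieval-based prompting idea admits a compact sketch: score stored (question, SQL) pairs against the incoming question and prepend the top-k as few-shot demonstrations. The token-overlap scorer below is a stand-in for whatever retriever RB-SQL actually uses.

```python
def build_prompt(question, pool, k=3):
    """pool: list of {"question": ..., "sql": ...} examples to retrieve from."""
    def overlap(a, b):
        # Jaccard similarity over lowercased tokens; a placeholder retriever.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    shots = sorted(pool, key=lambda ex: overlap(question, ex["question"]),
                   reverse=True)[:k]
    demos = "\n\n".join(f"Q: {ex['question']}\nSQL: {ex['sql']}" for ex in shots)
    return f"{demos}\n\nQ: {question}\nSQL:"
```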
arXiv Detail & Related papers (2024-07-11T08:19:58Z)
- TrustSQL: Benchmarking Text-to-SQL Reliability with Penalty-Based Scoring [11.78795632771211]
We introduce a novel benchmark designed to evaluate text-to-SQL reliability as a model's ability to correctly handle any type of input question.
We evaluate existing methods using a novel penalty-based scoring metric with two modeling approaches.
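In the spirit of the penalty-based scoring described above, a toy version might award +1 for a correct answer, 0 for an abstention, and -c for a wrong answer; the paper's exact metric may differ.

```python
def reliability_score(outcomes, c=1.0):
    """outcomes: iterable of "correct" | "abstain" | "wrong".

    Toy penalty-based score: correct answers earn +1, abstentions earn 0,
    and wrong answers cost -c, so guessing is discouraged as c grows.
    """
    points = {"correct": 1.0, "abstain": 0.0, "wrong": -c}
    outcomes = list(outcomes)
    return sum(points[o] for o in outcomes) / len(outcomes)
```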
arXiv Detail & Related papers (2024-03-23T16:12:52Z)
- Blar-SQL: Faster, Stronger, Smaller NL2SQL [0.0]
We show how task decomposition can greatly benefit Large Language Models (LLMs) in database understanding and query generation.
We propose a new framework to divide the schema into chunks in order to fit more information into a limited context.
Our results are comparable with those obtained by GPT-4 while being 135 times smaller, 90 times faster, and more than 100 times cheaper than GPT-4.
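A hedged sketch of the chunking idea above: present the schema a few tables at a time, ask the model which tables matter, and answer with only the flagged tables in context. The chunk size, prompt wording, and substring-based reply parsing are assumptions.

```python
def relevant_tables(question, tables, llm, chunk_size=5):
    """tables: dict of table name -> column list; llm.generate is assumed."""
    names, keep = list(tables), []
    for i in range(0, len(names), chunk_size):
        chunk = {t: tables[t] for t in names[i:i + chunk_size]}
        reply = llm.generate(
            f"Question: {question}\nTables: {chunk}\n"
            "Which of these tables are needed? Reply with names or 'none'."
        )
        keep += [t for t in chunk if t in reply]  # naive reply parsing
    return keep
```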
arXiv Detail & Related papers (2024-01-04T16:50:52Z)
- SQLPrompt: In-Context Text-to-SQL with Minimal Labeled Data [54.69489315952524]
"Prompt" is designed to improve the few-shot prompting capabilities of Text-to-LLMs.
"Prompt" outperforms previous approaches for in-context learning with few labeled data by a large margin.
We show that emphPrompt outperforms previous approaches for in-context learning with few labeled data by a large margin.
arXiv Detail & Related papers (2023-11-06T05:24:06Z)
- Interleaving Pre-Trained Language Models and Large Language Models for Zero-Shot NL2SQL Generation [23.519727682763644]
Zero-shot NL2SQL is crucial for achieving natural-language-to-SQL translation that adapts to new environments.
Existing approaches either fine-tune pretrained language models (PLMs) based on annotated data or use prompts to guide fixed large language models (LLMs) such as ChatGPT.
We propose a ZeroNL2SQL framework that combines the complementary advantages of PLMs and LLMs to support zero-shot NL2SQL.
arXiv Detail & Related papers (2023-06-15T06:50:51Z)
- SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) [53.95151604061761]
This paper introduces a framework for enhancing Text-to-SQL using large language models (LLMs).
With few-shot prompting, we explore the effectiveness of consistency decoding with execution-based error analyses.
With instruction fine-tuning, we delve deeply into understanding the critical paradigms that influence the performance of tuned LLMs.
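Execution-based consistency decoding has a standard shape, sketched below: sample several SQL candidates, execute them, and return a candidate whose result set is most common, discarding ones that error out. SQLite stands in for the actual execution backend, and this is not the paper's code.

```python
import sqlite3
from collections import Counter

def consistency_decode(candidates, db_path):
    results = {}
    for sql in candidates:
        try:
            con = sqlite3.connect(db_path)
            # Hash execution results so semantically agreeing queries vote together.
            results[sql] = tuple(map(tuple, con.execute(sql).fetchall()))
            con.close()
        except Exception:
            continue  # execution error: this candidate is filtered out
    if not results:
        return candidates[0]
    top = Counter(results.values()).most_common(1)[0][0]
    return next(sql for sql, res in results.items() if res == top)
```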
arXiv Detail & Related papers (2023-05-26T21:39:05Z)
- Weakly Supervised Text-to-SQL Parsing through Question Decomposition [53.22128541030441]
We take advantage of the recently proposed question meaning representation called QDMR.
Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries.
Our results show that the weakly supervised models perform competitively with those trained on NL-SQL benchmark data.
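As intuition for how a decomposition can drive SQL synthesis, here is a deliberately toy mapper from a two-step QDMR-like decomposition to SQL; the paper's synthesis procedure is far more general, and the step grammar here is invented for illustration.

```python
def qdmr_to_sql(steps):
    """Toy mapping: step 1 names a table, later steps add WHERE filters."""
    table = steps[0].removeprefix("return ").strip()
    sql = f"SELECT * FROM {table}"
    for step in steps[1:]:
        if "where" in step:
            sql += " WHERE " + step.split("where", 1)[1].strip()
    return sql

print(qdmr_to_sql(["return flights", "return #1 where origin = 'Denver'"]))
# -> SELECT * FROM flights WHERE origin = 'Denver'
```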
arXiv Detail & Related papers (2021-12-12T20:02:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.