Selection of Prompt Engineering Techniques for Code Generation through Predicting Code Complexity
- URL: http://arxiv.org/abs/2409.16416v1
- Date: Tue, 24 Sep 2024 19:28:55 GMT
- Title: Selection of Prompt Engineering Techniques for Code Generation through Predicting Code Complexity
- Authors: Chung-Yu Wang, Alireza DaghighFarsoodeh, Hung Viet Pham
- Abstract summary: We propose PET-Select, a PET-agnostic selection model that uses code complexity as a proxy to classify queries.
PET-Select distinguishes between simple and complex problems, allowing it to choose PETs that are best suited for each query's complexity level.
Our evaluations on the MBPP and HumanEval benchmarks show up to a 1.9% improvement in pass@1 accuracy, along with a 74.8% reduction in token usage.
- Score: 2.576214343259399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have demonstrated impressive performance in software engineering tasks. However, improving their accuracy in generating correct and reliable code remains challenging. Numerous prompt engineering techniques (PETs) have been developed to address this, but no single approach is universally optimal. Selecting the right PET for each query is difficult for two primary reasons: (1) interactive prompting techniques may not consistently deliver the expected benefits, especially for simpler queries, and (2) current automated prompt engineering methods lack adaptability and fail to fully utilize multi-stage responses. To overcome these challenges, we propose PET-Select, a PET-agnostic selection model that uses code complexity as a proxy to classify queries and select the most appropriate PET. By incorporating contrastive learning, PET-Select effectively distinguishes between simple and complex problems, allowing it to choose PETs that are best suited for each query's complexity level. Our evaluations on the MBPP and HumanEval benchmarks using GPT-3.5 Turbo and GPT-4o show up to a 1.9% improvement in pass@1 accuracy, along with a 74.8% reduction in token usage. Additionally, we provide both quantitative and qualitative results to demonstrate how PET-Select effectively selects the most appropriate techniques for each code generation query, further showcasing its efficiency in optimizing PET selection.
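To make the selection idea concrete, here is a minimal sketch of a complexity-aware PET router in the spirit of PET-Select. Everything in it (the length-based complexity stand-in, the threshold, and the two example PETs) is an illustrative assumption, not the authors' released code; the paper's actual predictor is a learned model trained with contrastive learning on query embeddings.

```python
# Minimal sketch of a complexity-aware PET selector in the spirit of
# PET-Select. All names, thresholds, and PET implementations below are
# illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PET:
    name: str
    build_prompt: Callable[[str], str]  # query -> prompt sent to the LLM
    avg_token_cost: int                 # rough relative token budget

def zero_shot(query: str) -> str:
    # Cheap single-turn prompt for simple queries.
    return query

def chain_of_thought(query: str) -> str:
    # Token-hungry multi-step prompt for complex queries.
    return f"Let's solve this step by step, then write the code.\n{query}"

PETS = {
    "simple": PET("zero-shot", zero_shot, avg_token_cost=100),
    "complex": PET("chain-of-thought", chain_of_thought, avg_token_cost=800),
}

def predicted_complexity(query: str) -> float:
    """Stand-in for PET-Select's learned predictor, which embeds the query
    and estimates the complexity of the code it will require. Here we fake
    it with query length for illustration only."""
    return min(len(query.split()) / 50.0, 1.0)

def select_pet(query: str, threshold: float = 0.5) -> PET:
    # Route simple queries to the cheap PET and complex ones to the
    # stronger, more expensive technique.
    key = "complex" if predicted_complexity(query) >= threshold else "simple"
    return PETS[key]

if __name__ == "__main__":
    query = "Write a function that returns the nth Fibonacci number."
    pet = select_pet(query)
    print(pet.name)               # zero-shot (short query -> low complexity)
    print(pet.build_prompt(query))
```

The routing itself is trivial; the paper's contribution is the predictor that stands in for `predicted_complexity`, which is why the token savings come almost entirely from not sending simple queries through multi-stage prompting.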
Related papers
- UniPET-SPK: A Unified Framework for Parameter-Efficient Tuning of Pre-trained Speech Models for Robust Speaker Verification [32.3387409534726]
This study explores parameter-efficient tuning (PET) methods for adapting large-scale pre-trained SSL speech models to the speaker verification task.
We propose three PET methods: (i) an adapter-tuning method, (ii) a prompt-tuning method, and (iii) a unified framework that effectively incorporates adapter-tuning and prompt-tuning with a dynamically learnable gating mechanism.
The proposed UniPET-SPK learns to find the optimal mixture of PET methods to match different datasets and scenarios.
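As a rough picture of what a dynamically learnable gate over adapter-tuning and prompt-tuning branches can look like, here is a PyTorch sketch; the dimensions, the frozen attention block, and the gate placement are assumptions for illustration, not UniPET-SPK's actual architecture.

```python
# Illustrative sketch of gating between an adapter branch and a prompt
# branch over a frozen backbone block. Shapes and design are assumptions,
# not the UniPET-SPK paper's architecture.
import torch
import torch.nn as nn

class GatedPET(nn.Module):
    def __init__(self, dim: int = 768, bottleneck: int = 64, n_prompts: int = 4):
        super().__init__()
        # Stand-in for one frozen backbone block (e.g. a transformer layer).
        self.frozen_block = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        for p in self.frozen_block.parameters():
            p.requires_grad_(False)
        # PET branch 1: bottleneck adapter applied to the block output.
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )
        # PET branch 2: learnable prompts prepended as extra key/value tokens.
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # Learnable gate mixing the two branches per utterance.
        self.gate = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Branch 1: frozen block plus adapter residual.
        out_a, _ = self.frozen_block(x, x, x)
        out_a = out_a + self.adapter(out_a)
        # Branch 2: frozen block attending over [prompts; x] as keys/values.
        kv = torch.cat([self.prompts.expand(x.size(0), -1, -1), x], dim=1)
        out_p, _ = self.frozen_block(x, kv, kv)
        # Gate in [0, 1], computed from the mean-pooled input.
        g = torch.sigmoid(self.gate(x.mean(dim=1, keepdim=True)))
        return g * out_a + (1 - g) * out_p

x = torch.randn(2, 100, 768)
print(GatedPET()(x).shape)  # torch.Size([2, 100, 768])
```

The point of the sketch is that the backbone stays frozen and only the adapter, prompts, and gate are trained, with the gate learning how much each PET branch should contribute.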
arXiv Detail & Related papers (2025-01-27T22:26:37Z)
- Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation [30.912818564963512]
DETRIS is a parameter-efficient tuning framework designed to enhance low-rank visual feature propagation.
Our simple yet efficient approach greatly surpasses state-of-the-art methods while updating only 0.9% to 1.8% of the backbone parameters.
arXiv Detail & Related papers (2025-01-15T05:00:03Z)
- HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning [55.88910947643436]
We propose a unified framework for continual learning (CL) with pre-trained models (PTMs) and parameter-efficient tuning (PET).
We present Hierarchical Decomposition PET (HiDe-PET), an innovative approach that explicitly optimizes the objective by incorporating task-specific and task-shared knowledge.
Our approach demonstrates remarkably superior performance over a broad spectrum of recent strong baselines.
arXiv Detail & Related papers (2024-07-07T01:50:25Z)
- ConPET: Continual Parameter-Efficient Tuning for Large Language Models [65.48107393731861]
Continual learning requires continual adaptation of models to newly emerging tasks.
We propose Continual Parameter-Efficient Tuning (ConPET), a generalizable paradigm for continual task adaptation of large language models.
arXiv Detail & Related papers (2023-09-26T08:52:04Z)
- Exploring the Impact of Model Scaling on Parameter-Efficient Tuning [100.61202305296275]
Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs) by training only minimal parameters.
In small PLMs, there are usually noticeable performance differences among PET methods.
We introduce a more flexible PET method called Arbitrary PET (APET).
arXiv Detail & Related papers (2023-06-04T10:10:54Z)
- A Unified Continual Learning Framework with General Parameter-Efficient Tuning [56.250772378174446]
"Pre-training $rightarrow$ downstream adaptation" presents both new opportunities and challenges for Continual Learning.
We position prompting as one instantiation of PET and propose a unified CL framework dubbed Learning-Accumulation-Ensemble (LAE).
PET methods, e.g., Adapter, LoRA, or Prefix tuning, can adapt a pre-trained model to downstream tasks with fewer trainable parameters and resources.
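For readers unfamiliar with these methods, a minimal LoRA-style layer is sketched below; the rank, scaling, and dimensions are illustrative assumptions, not the paper's configuration. Only the low-rank matrices A and B are trained while the pre-trained weight stays frozen, which is where the parameter savings come from.

```python
# Minimal LoRA-style layer: frozen base weight plus a trainable low-rank
# update. Rank, alpha, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pre-trained weight (stand-in for a backbone layer).
        self.base = nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Trainable low-rank update: delta_W = B @ A, scaled by alpha / rank.
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))  # zero-init: no-op at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} of {total}")  # only A and B train
```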
arXiv Detail & Related papers (2023-03-17T15:52:45Z)
- Sparse Structure Search for Parameter-Efficient Tuning [85.49094523664428]
We show that S$^3$PET surpasses manual and random structures with fewer trainable parameters.
The searched structures preserve more than 99% of fine-tuning performance with only 0.01% trainable parameters.
arXiv Detail & Related papers (2022-06-15T08:45:21Z)
- Improving and Simplifying Pattern Exploiting Training [81.77863825517511]
Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning.
In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET.
ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data.
arXiv Detail & Related papers (2021-03-22T15:52:45Z)