GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model
- URL: http://arxiv.org/abs/2402.02408v2
- Date: Mon, 02 Dec 2024 07:47:00 GMT
- Title: GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model
- Authors: Xuanchang Zhang, Zhuosheng Zhang, Hai Zhao
- Abstract summary: We propose a gold label-agnostic prompt evaluation (GLaPE) to alleviate dependence on gold labels. We show that GLaPE provides reliable evaluations consistent with accuracy, even in the absence of gold labels. On six popular reasoning tasks, our GLaPE-based prompt optimization yields effective prompts comparable to accuracy-based ones.
- Score: 59.495717939664246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the rapid progress of large language models (LLMs), their task performance remains sensitive to prompt design. Recent studies have explored leveraging the LLM itself as an optimizer to identify optimal prompts that maximize task accuracy. However, when evaluating prompts, such approaches rely heavily on elusive manually annotated gold labels to calculate task accuracy for each candidate prompt, which hinders their widespread implementation and generality. To overcome this limitation, this work proposes a gold label-agnostic prompt evaluation (GLaPE) to alleviate dependence on gold labels. Motivated by the observed correlation between self-consistency and the accuracy of the answer, we adopt self-consistency as the initial evaluation score. Subsequently, we refine the scores of prompts producing identical answers to be mutually consistent. Experimental results show that GLaPE provides reliable evaluations consistent with accuracy, even in the absence of gold labels. Moreover, on six popular reasoning tasks, our GLaPE-based prompt optimization yields effective prompts comparable to accuracy-based ones. The code is publicly available at https://github.com/thunderous77/GLaPE.
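For intuition, below is a minimal Python sketch of the two-stage evaluation described in the abstract; it is not the authors' released implementation. Self-consistency of sampled answers serves as the initial score, and the scores of prompts that produce identical answers are then pushed toward agreement. The `sample_answers` helper is hypothetical (it stands in for querying the LLM), and the refinement step here simply averages scores within each group of prompts sharing a majority answer, which only approximates the paper's mutual-consistency refinement.

```python
from collections import Counter, defaultdict

def self_consistency_score(answers):
    """Return (majority answer, fraction of samples agreeing with it)."""
    majority_answer, count = Counter(answers).most_common(1)[0]
    return majority_answer, count / len(answers)

def glape_scores(prompts, questions, sample_answers, n_samples=10):
    """Gold label-agnostic prompt scores (illustrative sketch).

    `sample_answers(prompt, question, n)` is a hypothetical helper that
    queries the LLM n times with the given prompt and returns the
    extracted final answers.
    """
    # Stage 1: initial score = mean self-consistency over all questions.
    majority = {}           # (prompt, question) -> majority answer
    sc = defaultdict(list)  # prompt -> per-question self-consistency scores
    for p in prompts:
        for q in questions:
            ans, score = self_consistency_score(sample_answers(p, q, n_samples))
            majority[(p, q)] = ans
            sc[p].append(score)
    initial = {p: sum(v) / len(v) for p, v in sc.items()}

    # Stage 2 (simplified): make scores mutually consistent across prompts
    # that yield the same majority answer, question by question, by replacing
    # each per-question score with its answer-group average.
    refined = {p: 0.0 for p in prompts}
    for qi, q in enumerate(questions):
        groups = defaultdict(list)
        for p in prompts:
            groups[majority[(p, q)]].append(p)
        for group in groups.values():
            group_score = sum(sc[p][qi] for p in group) / len(group)
            for p in group:
                refined[p] += group_score / len(questions)
    return initial, refined
```

In this sketch, a prompt whose sampled answers fluctuate receives a low initial score, and the group averaging enforces that prompts converging on the same answer are scored consistently with one another, which is the property the abstract describes.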
Related papers
- CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward [50.97588334916863]
We develop CompassVerifier, an accurate and robust lightweight verifier model for evaluation and outcome reward. It demonstrates multi-domain competency spanning math, knowledge, and diverse reasoning tasks, with the capability to process various answer types. We introduce the VerifierBench benchmark, comprising model outputs collected from multiple data sources and augmented through manual analysis of meta-error patterns to enhance CompassVerifier.
arXiv Detail & Related papers (2025-08-05T17:55:24Z)
- Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput [21.59519440154879]
We show that an outcome reward model (ORM) plays a crucial role in scaling verification through trading accuracy for speed. We analyze the generate-prune-then-rank approach and show that it works by filtering out incorrect but highly ranked solutions.
arXiv Detail & Related papers (2025-06-11T17:58:21Z)
- ReliableEval: A Recipe for Stochastic LLM Evaluation via Method of Moments [21.37415398600286]
We argue for a method of moments evaluation over the space of meaning-preserving prompt perturbations. We show that even top-performing models like GPT-4o and Claude-3.7-Sonnet exhibit substantial prompt sensitivity.
arXiv Detail & Related papers (2025-05-28T09:40:48Z)
- Search-Based Correction of Reasoning Chains for Language Models [72.61861891295302]
Chain-of-Thought (CoT) reasoning has advanced the capabilities and transparency of language models (LMs). We introduce a new self-correction framework that augments each reasoning step in a CoT with a latent variable indicating its veracity. We also introduce Search Corrector, a discrete search algorithm over veracity assignments.
arXiv Detail & Related papers (2025-05-17T04:16:36Z)
- Beyond the Singular: The Essential Role of Multiple Generations in Effective Benchmark Evaluation and Analysis [10.133537818749291]
Large language models (LLMs) have demonstrated significant utility in real-world applications.
Benchmark evaluations are crucial for assessing the capabilities of LLMs.
arXiv Detail & Related papers (2025-02-13T03:43:33Z)
- A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization [31.722907135361492]
Open-ended short-answer questions (SAGs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA).
SAGs often present challenges in practice due to the high grading workload and concerns about inconsistent assessments.
We propose a unified multi-agent ASAG framework, GradeOpt, which leverages large language models (LLMs) as graders for SAGs.
arXiv Detail & Related papers (2024-10-03T03:11:24Z)
- Integrative Decoding: Improve Factuality via Implicit Self-consistency [45.27124252002816]
Self-consistency-based approaches are remarkably effective in improving the factual accuracy of large language models.
We present Integrative Decoding (ID) to unlock the potential of self-consistency in open-ended generation tasks.
arXiv Detail & Related papers (2024-10-02T13:52:55Z)
- On the Worst Prompt Performance of Large Language Models [93.13542053835542]
Performance of large language models (LLMs) is acutely sensitive to the phrasing of prompts.
We introduce RobustAlpacaEval, a new benchmark that consists of semantically equivalent case-level queries.
Experiments on RobustAlpacaEval with ChatGPT and six open-source LLMs from the Llama, Mistral, and Gemma families uncover substantial variability in model performance.
arXiv Detail & Related papers (2024-06-08T13:40:38Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Test-Time Personalization with Meta Prompt for Gaze Estimation [23.01057994927244]
We take inspiration from recent advances in Natural Language Processing (NLP) by updating a negligible number of parameters, "prompts", at test time.
We propose to meta-learn the prompt to ensure that its updates align with the goal.
Our experiments show that the meta-learned prompt can be effectively adapted even with a simple symmetry loss.
arXiv Detail & Related papers (2024-01-03T07:02:35Z)
- Self-Evaluation Improves Selective Generation in Large Language Models [54.003992911447696]
We reformulate open-ended generation tasks into token-level prediction tasks.
We instruct an LLM to self-evaluate its answers.
We benchmark a range of scoring methods based on self-evaluation.
arXiv Detail & Related papers (2023-12-14T19:09:22Z)
- Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? [84.91689960190054]
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts.
What factors make prompts effective remains underexplored, especially when the prompts are natural language.
arXiv Detail & Related papers (2022-12-20T18:47:13Z)
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning [63.77667876176978]
Large language models show improved downstream task interpretability when prompted to generate step-by-step reasoning to justify their final answers.
These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness is difficult.
We present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve upon and extend previous text generation evaluation metrics.
arXiv Detail & Related papers (2022-12-15T15:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.