GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model
- URL: http://arxiv.org/abs/2402.02408v1
- Date: Sun, 4 Feb 2024 08:57:54 GMT
- Title: GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model
- Authors: Xuanchang Zhang, Zhuosheng Zhang, Hai Zhao
- Abstract summary: We propose a gold label-agnostic prompt evaluation (GLaPE) to alleviate dependence on gold labels.
We show that GLaPE provides reliable evaluations consistent with accuracy, even in the absence of gold labels.
On six popular reasoning tasks, our GLaPE-based prompt optimization yields effective prompts comparable to accuracy-based ones.
- Score: 66.86722460851968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the rapid progress of large language models (LLMs), their task
performance remains sensitive to prompt design. Recent studies have explored
leveraging the LLM itself as an optimizer to identify optimal prompts that
maximize task accuracy. However, when evaluating prompts, such approaches
heavily rely on elusive manually annotated gold labels to calculate task
accuracy for each candidate prompt, which hinders their widespread application
and generality. To overcome this limitation, this work proposes a gold
label-agnostic prompt evaluation (GLaPE) to alleviate dependence on gold
labels. Motivated by the observed correlation between self-consistency and the
accuracy of the answer, we adopt self-consistency as the initial evaluation
score. Subsequently, we refine the scores of prompts producing identical
answers to be mutually consistent. Experimental results show that GLaPE
provides reliable evaluations consistent with accuracy, even in the absence of
gold labels. Moreover, on six popular reasoning tasks, our GLaPE-based prompt
optimization yields effective prompts comparable to accuracy-based ones. The
code is publicly available at https://github.com/thunderous77/GLaPE.
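The abstract sketches a two-step procedure: use self-consistency (agreement among sampled answers) as an initial score, then refine the scores of prompts that produce identical answers so they become mutually consistent. Below is a minimal Python sketch of that idea for a single question; the majority-vote scoring, the quadratic refinement objective, and the gradient-descent update are simplifying assumptions of ours, not the paper's exact formulation (see the linked repository for the authors' implementation).

    from collections import Counter

    def self_consistency(sampled_answers):
        # Initial score: fraction of sampled answers that agree with the
        # majority answer for one prompt on one question.
        _, count = Counter(sampled_answers).most_common(1)[0]
        return count / len(sampled_answers)

    def refine_scores(final_answer, sc_score, lr=0.1, steps=200):
        # Refinement (simplified assumption, not the paper's exact objective):
        # keep each score close to its self-consistency value while pulling
        # together the scores of prompts whose final answers are identical,
        # via gradient descent on a quadratic penalty.
        prompts = list(final_answer)
        scores = dict(sc_score)
        for _ in range(steps):
            grads = {p: scores[p] - sc_score[p] for p in prompts}  # fidelity
            for i, p in enumerate(prompts):
                for q in prompts[i + 1:]:
                    if final_answer[p] == final_answer[q]:
                        diff = scores[p] - scores[q]  # mutual consistency
                        grads[p] += diff
                        grads[q] -= diff
            for p in prompts:
                scores[p] -= lr * grads[p]
        return scores

    # Toy usage: three candidate prompts, four sampled answers each.
    samples = {
        "Let's think step by step.": ["42", "42", "42", "17"],
        "Answer immediately.":       ["42", "17", "42", "35"],
        "Just guess.":               ["17", "17", "35", "17"],
    }
    sc = {p: self_consistency(a) for p, a in samples.items()}
    final = {p: Counter(a).most_common(1)[0][0] for p, a in samples.items()}
    print(refine_scores(final, sc))  # the two "42" prompts move toward each other

In the full method the evaluation presumably runs over many questions rather than one, with the per-question scores aggregated into a single prompt score.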
Related papers
- A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization [31.722907135361492]
Open-ended short-answer questions (SAGs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA).
SAGs often present challenges in practice due to the high grading workload and concerns about inconsistent assessments.
We propose a unified multi-agent ASAG framework, GradeOpt, which leverages large language models (LLMs) as graders for SAGs.
arXiv Detail & Related papers (2024-10-03T03:11:24Z)
- Integrative Decoding: Improve Factuality via Implicit Self-consistency [45.27124252002816]
Self-consistency-based approaches are remarkably effective in improving the factual accuracy of large language models.
We present Integrative Decoding (ID), to unlock the potential of self-consistency in open-ended generation tasks.
arXiv Detail & Related papers (2024-10-02T13:52:55Z)
- On the Worst Prompt Performance of Large Language Models [93.13542053835542]
Performance of large language models (LLMs) is acutely sensitive to the phrasing of prompts.
We introduce RobustAlpacaEval, a new benchmark that consists of semantically equivalent case-level queries.
Experiments on RobustAlpacaEval with ChatGPT and six open-source LLMs from the Llama, Mistral, and Gemma families uncover substantial variability in model performance.
arXiv Detail & Related papers (2024-06-08T13:40:38Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Test-Time Personalization with Meta Prompt for Gaze Estimation [23.01057994927244]
We take inspiration from recent advances in Natural Language Processing (NLP) by updating a negligible number of parameters, the "prompts", at test time.
We propose to meta-learn the prompt to ensure that its updates align with the goal.
Our experiments show that the meta-learned prompt can be effectively adapted even with a simple symmetry loss.
arXiv Detail & Related papers (2024-01-03T07:02:35Z)
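As a rough illustration of the test-time scheme described in the entry above, the sketch below freezes the network and updates only a small prompt tensor with a self-supervised symmetry loss (a horizontally flipped face should yield an opposite-sign yaw and the same pitch). The TinyGazeNet stand-in, the feature inputs, and the loss details are hypothetical; only the idea of adapting a meta-learned prompt at test time comes from the entry.

    import torch

    class TinyGazeNet(torch.nn.Module):
        # Hypothetical stand-in for a gaze backbone; `prompt` is the small
        # set of parameters whose initialization would be meta-learned.
        def __init__(self, feat_dim=64, prompt_dim=8):
            super().__init__()
            self.prompt = torch.nn.Parameter(torch.zeros(prompt_dim))
            self.head = torch.nn.Linear(feat_dim + prompt_dim, 2)  # (yaw, pitch)

        def forward(self, feats):
            p = self.prompt.expand(feats.size(0), -1)
            return self.head(torch.cat([feats, p], dim=1))

    def test_time_adapt(model, feats, feats_flipped, steps=5, lr=1e-2):
        # Update only the prompt; everything else stays frozen.
        opt = torch.optim.SGD([model.prompt], lr=lr)
        for _ in range(steps):
            gaze = model(feats)
            gaze_flip = model(feats_flipped)
            # Symmetry loss: mirror the flipped prediction (negate yaw,
            # keep pitch) and pull the two predictions together.
            mirrored = torch.stack([-gaze_flip[:, 0], gaze_flip[:, 1]], dim=1)
            loss = torch.nn.functional.mse_loss(gaze, mirrored)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model(feats)

Here feats and feats_flipped would be features extracted from a test image and its horizontal flip, so the adaptation needs no labels.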
- Self-Evaluation Improves Selective Generation in Large Language Models [54.003992911447696]
We reformulate open-ended generation tasks into token-level prediction tasks.
We instruct an LLM to self-evaluate its answers.
We benchmark a range of scoring methods based on self-evaluation.
arXiv Detail & Related papers (2023-12-14T19:09:22Z)
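A minimal sketch of the self-evaluation idea from the entry above: pose the model's own answer back to it as a two-option question and score it by the relative probability of the "yes" option token. The prompt template and the token_logprobs hook (returning log-probabilities for candidate next tokens) are hypothetical stand-ins, not the paper's exact setup.

    import math

    def self_eval_score(question, answer, token_logprobs):
        # Ask the model to judge its own answer, reduced to a single
        # token-level prediction between "(A) yes" and "(B) no".
        prompt = (
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "Is the proposed answer correct?\n"
            "(A) yes\n(B) no\n"
            "The correct option is ("
        )
        lp = token_logprobs(prompt)  # hypothetical hook: {token: logprob}
        p_yes = math.exp(lp.get("A", float("-inf")))
        p_no = math.exp(lp.get("B", float("-inf")))
        total = p_yes + p_no
        # Normalized confidence that the answer is correct; 0.5 if the
        # model assigns no mass to either option token.
        return p_yes / total if total else 0.5

Selective generation then amounts to thresholding this score: answers below the threshold are withheld rather than returned.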
- Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? [84.91689960190054]
Large language models can perform new tasks in a zero-shot fashion, given natural language prompts.
What factors make such prompts effective remains underexplored, especially when the prompts are natural language.
arXiv Detail & Related papers (2022-12-20T18:47:13Z)
- ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning [63.77667876176978]
Large language models show improved downstream task interpretability when prompted to generate step-by-step reasoning to justify their final answers.
These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness is difficult.
We present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics.
arXiv Detail & Related papers (2022-12-15T15:52:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.