Prometheus: Inducing Fine-grained Evaluation Capability in Language
Models
- URL: http://arxiv.org/abs/2310.08491v2
- Date: Sat, 9 Mar 2024 10:44:58 GMT
- Title: Prometheus: Inducing Fine-grained Evaluation Capability in Language
Models
- Authors: Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran
Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo
- Abstract summary: We propose Prometheus, a fully open-source Large Language Model (LLM) that is on par with GPT-4's evaluation capabilities.
Prometheus scores a Pearson correlation of 0.897 with human evaluators when evaluating with 45 customized score rubrics.
Prometheus achieves the highest accuracy on two human preference benchmarks.
- Score: 66.12432440863816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, using a powerful proprietary Large Language Model (LLM) (e.g.,
GPT-4) as an evaluator for long-form responses has become the de facto
standard. However, for practitioners with large-scale evaluation tasks and
custom criteria in consideration (e.g., child-readability), using proprietary
LLMs as an evaluator is unreliable due to the closed-source nature,
uncontrolled versioning, and prohibitive costs. In this work, we propose
Prometheus, a fully open-source LLM that is on par with GPT-4's evaluation
capabilities when appropriate reference materials (a reference answer and a score
rubric) are provided. We first construct the Feedback Collection, a new
dataset that consists of 1K fine-grained score rubrics, 20K instructions, and
100K responses and language feedback generated by GPT-4. Using the Feedback
Collection, we train Prometheus, a 13B evaluator LLM that can assess any given
long-form text based on a customized score rubric provided by the user.
Experimental results show that Prometheus scores a Pearson correlation of 0.897
with human evaluators when evaluating with 45 customized score rubrics, which
is on par with GPT-4 (0.882), and greatly outperforms ChatGPT (0.392).
Furthermore, measuring correlation with GPT-4 using 1222 customized score
rubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask
Eval) shows similar trends, bolstering Prometheus's capability as an evaluator
LLM. Lastly, Prometheus achieves the highest accuracy on two human preference
benchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-source
reward models explicitly trained on human preference datasets, highlighting its
potential as a universal reward model. We open-source our code, dataset, and
model at https://kaistai.github.io/prometheus/.
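The workflow described in the abstract (provide an instruction, a response, a reference answer, and a customized score rubric; collect feedback plus a score; then compare scores against human ratings via Pearson correlation) can be sketched as follows. This is a minimal illustration under stated assumptions: the prompt wording, the 1-5 scale, and the "Score: N" parsing convention are simplifications rather than the official Feedback Collection template, and the score lists are toy data.

```python
# Illustrative sketch of rubric-based evaluation as described in the abstract.
# The prompt wording and the "Score: N" convention are assumptions made for
# clarity; they are NOT the official Prometheus / Feedback Collection template
# (see https://kaistai.github.io/prometheus/ for the exact format).
import re

from scipy.stats import pearsonr


def build_eval_prompt(instruction, response, reference_answer, rubric):
    """Assemble an evaluation prompt from the four inputs the abstract names."""
    return (
        "You are a fair evaluator. Given an instruction, a response to "
        "evaluate, a reference answer that deserves a score of 5, and a score "
        "rubric, write feedback and finish with 'Score: N' where N is 1-5.\n\n"
        f"Instruction: {instruction}\n"
        f"Response: {response}\n"
        f"Reference answer: {reference_answer}\n"
        f"Score rubric: {rubric}\n"
    )


def parse_score(feedback):
    """Pull the final 1-5 rating out of the generated feedback text."""
    match = re.search(r"Score:\s*([1-5])", feedback)
    return int(match.group(1)) if match else None


# With per-sample scores in hand, the correlations quoted above (0.897 vs.
# human evaluators, 0.882 for GPT-4, 0.392 for ChatGPT) are plain Pearson
# correlations between the evaluator's scores and the human scores.
model_scores = [5, 3, 4, 2, 5]   # toy numbers, not the paper's data
human_scores = [5, 2, 4, 2, 4]
r, _ = pearsonr(model_scores, human_scores)
print(f"Pearson r = {r:.3f}")
```

In practice, the feedback text would come from querying the released 13B evaluator at the project page above rather than being assumed.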
Related papers
- A Large-Scale Study of Relevance Assessments with Large Language Models: An Initial Look [52.114284476700874]
This paper reports on the results of a large-scale evaluation (the TREC 2024 RAG Track) where four different relevance assessment approaches were deployed.
We find that automatically generated UMBRELA judgments can replace fully manual judgments to accurately capture run-level effectiveness.
Surprisingly, we find that LLM assistance does not appear to increase correlation with fully manual assessments, suggesting that costs associated with human-in-the-loop processes do not bring obvious tangible benefits.
arXiv Detail & Related papers (2024-11-13T01:12:35Z)
- A Fine-grained Sentiment Analysis of App Reviews using Large Language Models: An Evaluation Study [1.0787328610467801]
Large Language Models (LLMs) have shown impressive performance on several new tasks without updating the model's parameters.
This study compares the performance of state-of-the-art LLMs, including GPT-4, ChatGPT, and LLama-2-chat variants, for extracting app features.
Results indicate the best-performing GPT-4 model outperforms rule-based approaches by 23.6% in f1-score with zero-shot feature extraction.
arXiv Detail & Related papers (2024-09-11T10:21:13Z)
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models [92.66784679667441]
Prometheus 2 is a more powerful evaluator LM that closely mirrors human and GPT-4 judgments.
It is capable of processing both direct assessment and pairwise ranking formats grouped with user-defined evaluation criteria.
On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and agreement with humans and proprietary LM judges.
arXiv Detail & Related papers (2024-05-02T17:59:35Z)
- Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks [9.801767683867125]
We provide a preliminary and hybrid evaluation on three NLP benchmarks using both automatic and human evaluation.
We find that ChatGPT consistently outperforms many other popular models according to human reviewers on the majority of metrics.
We also find that human reviewers rate the gold reference as much worse than the best models' outputs, indicating the poor quality of many popular benchmarks.
arXiv Detail & Related papers (2023-10-20T20:17:09Z)
- A Closer Look into Automatic Evaluation Using Large Language Models [75.49360351036773]
We discuss how details in the evaluation process change how well the ratings given by LLMs correlate with human ratings.
We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings.
We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal.
arXiv Detail & Related papers (2023-10-09T12:12:55Z)
- Learning Personalized Alignment for Evaluating Open-ended Text Generation [44.565686959174585]
PerSE is an interpretable evaluation framework designed to assess alignment with specific human preferences.
It is tuned to infer specific preferences from an in-context personal profile and evaluate the alignment between the generated content and personal preferences.
Our 13B LLaMA-2-based PerSE shows a 15.8% increase in Kendall correlation and a 13.7% rise in accuracy with zero-shot reviewers.
arXiv Detail & Related papers (2023-10-05T04:15:48Z)
- Split and Merge: Aligning Position Biases in Large Language Model based Evaluators [23.38206418382832]
PORTIA is an alignment-based system designed to mimic human comparison strategies to calibrate position bias.
Our results show that PORTIA markedly enhances the consistency rates for all the models and comparison forms tested.
It rectifies around 80% of the position bias instances within the GPT-4 model, elevating its consistency rate up to 98%; one way to measure such a consistency rate is sketched after this entry.
arXiv Detail & Related papers (2023-09-29T14:38:58Z)
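The consistency rate mentioned in the Split and Merge entry above is commonly measured by presenting a pairwise judge with the same two answers in both orders and checking whether its verdict stays with the same underlying answer. The sketch below illustrates that measurement under stated assumptions; `judge` is a hypothetical callable standing in for an LLM-based evaluator and is not part of PORTIA's actual interface.

```python
# Hedged sketch: estimate a pairwise judge's positional consistency rate by
# querying it twice per pair with the answer order swapped. `judge` is a
# hypothetical callable that returns "A" or "B" for whichever of the two
# presented answers it prefers; it stands in for an LLM-based evaluator.
def consistency_rate(judge, pairs):
    consistent = 0
    for answer_1, answer_2 in pairs:
        first = judge(answer_1, answer_2)    # answer_1 shown in position A
        second = judge(answer_2, answer_1)   # same pair, order swapped
        # Consistent if the same underlying answer wins regardless of position.
        if (first == "A") == (second == "B"):
            consistent += 1
    return consistent / len(pairs)
```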
- FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation [176.56131810249602]
Evaluating the factuality of long-form text generated by large language models (LMs) is non-trivial.
We introduce FACTSCORE, a new evaluation metric that breaks a generation into a series of atomic facts and computes the percentage of atomic facts supported by a reliable knowledge source; a minimal sketch of this computation appears after this entry.
arXiv Detail & Related papers (2023-05-23T17:06:00Z)
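The FActScore entry above defines the metric as the percentage of atomic facts in a generation that are supported by a reliable knowledge source. Below is a minimal sketch of that computation; `extract_atomic_facts` and `is_supported` are hypothetical placeholders for the fact-splitting and support-checking components, not the paper's actual interfaces.

```python
# Minimal sketch of a FActScore-style precision score. The two callables are
# hypothetical placeholders: `extract_atomic_facts` would split a generation
# into short, self-contained claims (e.g. with an LLM), and `is_supported`
# would check each claim against a reliable knowledge source.
def factual_precision(generation, extract_atomic_facts, is_supported):
    facts = extract_atomic_facts(generation)
    if not facts:
        return 0.0
    supported = sum(1 for fact in facts if is_supported(fact))
    return supported / len(facts)   # fraction of atomic facts that are supported
```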
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment [64.01972723692587]
We present G-Eval, a framework that uses large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs.
We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with humans on the summarization task, outperforming all previous methods by a large margin.
arXiv Detail & Related papers (2023-03-29T12:46:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.