WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
- URL: http://arxiv.org/abs/2406.04770v2
- Date: Sat, 05 Oct 2024 22:39:51 GMT
- Title: WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
- Authors: Bill Yuchen Lin, Yuntian Deng, Khyathi Chandu, Faeze Brahman, Abhilasha Ravichander, Valentina Pyatkin, Nouha Dziri, Ronan Le Bras, Yejin Choi
- Abstract summary: We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries.
WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversation logs.
We have developed two metrics, WB-Reward and WB-Score, which are computable using advanced LLMs.
- Score: 57.272096543738336
- License:
- Abstract: We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs) using challenging, real-world user queries. WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversation logs. For automated evaluation with WildBench, we have developed two metrics, WB-Reward and WB-Score, which are computable using advanced LLMs such as GPT-4-turbo. WildBench evaluation uses task-specific checklists to evaluate model outputs systematically and provides structured explanations that justify the scores and comparisons, resulting in more reliable and interpretable automatic judgments. WB-Reward employs fine-grained pairwise comparisons between model responses, generating five potential outcomes: much better, slightly better, slightly worse, much worse, or a tie. Unlike previous evaluations that employed a single baseline model, we selected three baseline models at varying performance levels to ensure a comprehensive pairwise evaluation. Additionally, we propose a simple method to mitigate length bias by converting outcomes of "slightly better/worse" to "tie" if the winning response exceeds the losing one by more than $K$ characters. WB-Score evaluates the quality of model outputs individually, making it a fast and cost-efficient evaluation metric. WildBench results demonstrate a strong correlation with the human-voted Elo ratings from Chatbot Arena on hard tasks. Specifically, WB-Reward achieves a Pearson correlation of 0.98 with top-ranking models. Additionally, WB-Score reaches 0.95, surpassing both ArenaHard's 0.91 and AlpacaEval2.0's 0.89 for length-controlled win rates, as well as AlpacaEval2.0's 0.87 for regular win rates.
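As a rough illustration of the scoring described in the abstract, the sketch below maps the five pairwise outcomes to numeric rewards and applies the stated length-bias rule (demoting a "slightly better/worse" verdict to a tie when the winning response is more than $K$ characters longer than the losing one). The numeric reward values, the default K, and all function and variable names are illustrative assumptions; only the five outcomes and the length-bias rule come from the abstract.

```python
# Minimal sketch of a WB-Reward-style aggregation, under the assumptions above.

# Five pairwise outcomes, mapped to illustrative reward values (assumption).
OUTCOME_REWARD = {
    "much_better": 1.0,
    "slightly_better": 0.5,
    "tie": 0.0,
    "slightly_worse": -0.5,
    "much_worse": -1.0,
}


def adjust_for_length_bias(outcome: str, model_response: str,
                           baseline_response: str, k: int = 500) -> str:
    """Convert a 'slightly better/worse' verdict to a tie if the winning
    response is more than K characters longer than the losing one.
    The default K here is arbitrary, not taken from the paper."""
    if outcome == "slightly_better" and len(model_response) - len(baseline_response) > k:
        return "tie"
    if outcome == "slightly_worse" and len(baseline_response) - len(model_response) > k:
        return "tie"
    return outcome


def wb_reward(judgments: list[dict], k: int = 500) -> float:
    """Average reward over pairwise judgments against baseline responses.

    Each judgment is assumed to look like:
      {"outcome": "slightly_better", "model": "<model response text>",
       "baseline": "<baseline response text>"}
    """
    total = 0.0
    for j in judgments:
        outcome = adjust_for_length_bias(j["outcome"], j["model"], j["baseline"], k)
        total += OUTCOME_REWARD[outcome]
    return total / len(judgments)


if __name__ == "__main__":
    example = [
        {"outcome": "slightly_better", "model": "x" * 1200, "baseline": "y" * 400},
        {"outcome": "much_better", "model": "short answer", "baseline": "short too"},
    ]
    # First verdict collapses to a tie (winner is >500 characters longer).
    print(wb_reward(example, k=500))  # -> (0.0 + 1.0) / 2 = 0.5
```

In the paper, WB-Reward is computed against three baseline models at different performance levels; the sketch above simply averages whatever pairwise judgments it is given.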
Related papers
- Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates [37.56003689042975]
We show that even a "null model" that always outputs a constant response can cheat automatic benchmarks and achieve top-ranked win rates.
Our findings call for the development of anti-cheating mechanisms for reliable automatic benchmarks.
arXiv Detail & Related papers (2024-10-09T17:53:06Z) - TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles [2.8839090723566296]
TurtleBench collects real user guesses from our online Turtle Soup Puzzle platform.
TurtleBench includes 1,532 user guesses along with the correctness of guesses after annotation.
We thoroughly evaluated nine of the most advanced Large Language Models available today.
arXiv Detail & Related papers (2024-10-07T17:58:47Z) - LiveBench: A Challenging, Contamination-Free LLM Benchmark [101.21578097087699]
We release LiveBench, the first benchmark that contains frequently-updated questions from recent information sources.
We evaluate many prominent closed-source models, as well as dozens of open-source models ranging from 0.5B to 110B in size.
Questions will be added and updated on a monthly basis, and we will release new tasks and harder versions of tasks over time.
arXiv Detail & Related papers (2024-06-27T16:47:42Z) - From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline [47.19203597218352]
BenchBuilder is an automated pipeline that curates high-quality, open-ended prompts from large, crowd-sourced datasets.
We release Arena-Hard-Auto, a benchmark consisting of 500 challenging prompts curated by BenchBuilder.
Our work sets a new framework for the scalable curation of automated benchmarks from extensive data.
arXiv Detail & Related papers (2024-06-17T17:26:10Z) - MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures [57.886592207948844]
We propose MixEval, a new paradigm for establishing efficient, gold-standard evaluation by strategically mixing off-the-shelf benchmarks.
It bridges (1) comprehensive and well-distributed real-world user queries and (2) efficient and fairly-graded ground-truth-based benchmarks, by matching queries mined from the web with similar queries from existing benchmarks.
arXiv Detail & Related papers (2024-06-03T05:47:05Z) - SimPO: Simple Preference Optimization with a Reference-Free Reward [43.136307294076545]
Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm.
We propose SimPO, a simpler yet more effective alternative to DPO.
SimPO consistently and significantly outperforms DPO without substantially increasing response length.
arXiv Detail & Related papers (2024-05-23T16:01:46Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
Under the elaborated robustness metric, a model is judged to be robust only if its performance is consistently accurate across whole cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show that GREAT Score correlates highly with attack-based model rankings on RobustBench while significantly reducing evaluation cost.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z) - Beyond User Self-Reported Likert Scale Ratings: A Comparison Model for Automatic Dialog Evaluation [69.03658685761538]
Open Domain dialog system evaluation is one of the most important challenges in dialog research.
We propose an automatic evaluation model CMADE that automatically cleans self-reported user ratings as it trains on them.
Our experiments show that CMADE achieves 89.2% accuracy in the dialog comparison task.
arXiv Detail & Related papers (2020-05-21T15:14:49Z)