Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates
- URL: http://arxiv.org/abs/2410.07137v1
- Date: Wed, 9 Oct 2024 17:53:06 GMT
- Title: Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates
- Authors: Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Jing Jiang, Min Lin
- Abstract summary: We show that even a "null model" that always outputs a constant response can cheat automatic benchmarks and achieve top-ranked win rates.
Our findings call for the development of anti-cheating mechanisms for reliable automatic benchmarks.
- Score: 37.56003689042975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic LLM benchmarks, such as AlpacaEval 2.0, Arena-Hard-Auto, and MT-Bench, have become popular for evaluating language models due to their cost-effectiveness and scalability compared to human evaluation. Achieving high win rates on these benchmarks can significantly boost the promotional impact of newly released language models. This promotional benefit may motivate tricks, such as manipulating model output length or style to game win rates, even though several mechanisms have been developed to control length and disentangle style to reduce gameability. Nonetheless, we show that even a "null model" that always outputs a constant response (irrelevant to input instructions) can cheat automatic benchmarks and achieve top-ranked win rates: an 86.5% LC win rate on AlpacaEval 2.0; an 83.0 score on Arena-Hard-Auto; and a 9.55 score on MT-Bench. Moreover, the crafted cheating outputs are transferable because they are constructed under the assumption that the benchmark instructions (e.g., the 805 samples of AlpacaEval 2.0) are private and cannot be accessed. While our experiments are primarily proof-of-concept, an adversary could use LLMs to generate more imperceptible cheating responses, unethically benefiting from high win rates and promotional impact. Our findings call for the development of anti-cheating mechanisms for reliable automatic benchmarks. The code is available at https://github.com/sail-sg/Cheating-LLM-Benchmarks.
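The attack is simple in structure: a null model ignores every instruction and returns one fixed string, and its win rate comes entirely from how the benchmark's LLM judge scores that constant output against a baseline. Below is a minimal sketch of that setup; the `NullModel` class, `win_rate` helper, and `dummy_judge` stub are illustrative assumptions, not the paper's crafted adversarial response or the official AlpacaEval 2.0 / Arena-Hard-Auto judging harnesses.

```python
# Minimal sketch of the "null model" idea: a model whose output never depends
# on the instruction, scored by a pairwise LLM-as-judge benchmark.
# All names here (NullModel, win_rate, dummy_judge) are illustrative; the
# paper's crafted constant response and the real GPT-4-class judges are not shown.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class NullModel:
    """Always returns the same fixed string, regardless of the instruction."""
    constant_response: str

    def generate(self, instruction: str) -> str:
        return self.constant_response  # the instruction is ignored entirely


def win_rate(
    instructions: List[str],
    baseline_outputs: List[str],
    model: NullModel,
    judge: Callable[[str, str, str], float],
) -> float:
    """Fraction of instructions on which the judge prefers the model's answer.

    judge(instruction, answer_a, answer_b) should return 1.0 if it prefers
    answer_a, 0.0 if it prefers answer_b, and 0.5 for a tie -- in the real
    benchmarks this role is played by a GPT-4-class judge model.
    """
    scores = [
        judge(inst, model.generate(inst), base)
        for inst, base in zip(instructions, baseline_outputs)
    ]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Toy stand-in for an LLM judge, just so the sketch runs end to end.
    def dummy_judge(instruction: str, a: str, b: str) -> float:
        return 1.0 if len(a) >= len(b) else 0.0

    null = NullModel(constant_response="<crafted constant response>")
    print(win_rate(
        ["Write a haiku about autumn.", "Explain how TCP handshakes work."],
        ["A short baseline answer.", "Another baseline answer."],
        null,
        dummy_judge,
    ))
```

Because the constant response never depends on the instruction, any high win rate it earns can only reflect weaknesses in the judging setup rather than genuine instruction-following ability, which is the gap the paper exploits and that anti-cheating mechanisms would need to close.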
Related papers
- Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation [61.350306618479365]
Leakage of benchmarks can prevent the accurate assessment of large language models' true performance.
We propose Inference-Time Decontamination (ITD) to address this issue.
ITD reduces inflated accuracy by 22.9% on GSM8K and 19.0% on MMLU.
arXiv Detail & Related papers (2024-06-20T04:35:59Z)
- From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline [47.19203597218352]
BenchBuilder is an automated pipeline that curates high-quality, open-ended prompts from large, crowd-sourced datasets.
We release Arena-Hard-Auto, a benchmark consisting of 500 challenging prompts curated by BenchBuilder.
Our work establishes a new framework for the scalable curation of automated benchmarks from extensive data.
arXiv Detail & Related papers (2024-06-17T17:26:10Z)
- WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild [57.272096543738336]
We introduce WildBench, an automated evaluation framework designed to benchmark large language models (LLMs).
WildBench consists of 1,024 tasks carefully selected from over one million human-chatbot conversation logs.
We have developed two metrics, WB-Reward and WB-Score, which are computable using advanced LLMs.
arXiv Detail & Related papers (2024-06-07T09:15:44Z)
- Auto-Arena: Automating LLM Evaluations with Agent Peer Battles and Committee Discussions [77.66677127535222]
Auto-Arena is an innovative framework that automates the entire evaluation process using LLM-powered agents.
In our experiments, Auto-Arena shows a 92.14% correlation with human preferences, surpassing all previous expert-annotated benchmarks.
arXiv Detail & Related papers (2024-05-30T17:19:19Z)
- The False Promise of Imitating Proprietary LLMs [158.65692029352584]
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model.
This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model.
We first finetune a series of LMs that imitate ChatGPT using varying base model sizes.
We then evaluate the models using crowd raters and canonical NLP benchmarks.
arXiv Detail & Related papers (2023-05-25T05:00:12Z)
- AIBench Training: Balanced Industry-Standard AI Training Benchmarking [26.820244556465333]
Earlier-stage evaluations of a new AI architecture/system need affordable benchmarks.
We use real-world benchmarks to cover the factor space that impacts the learning dynamics.
We contribute by far the most comprehensive AI training benchmark suite.
arXiv Detail & Related papers (2020-04-30T11:08:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.