Improving Your Model Ranking on Chatbot Arena by Vote Rigging
- URL: http://arxiv.org/abs/2501.17858v1
- Date: Wed, 29 Jan 2025 18:57:29 GMT
- Title: Improving Your Model Ranking on Chatbot Arena by Vote Rigging
- Authors: Rui Min, Tianyu Pang, Chao Du, Qian Liu, Minhao Cheng, Min Lin
- Abstract summary: We show that crowdsourced voting can be rigged to improve the ranking of a target model $m_t$.
We conduct experiments on around $1.7$ million historical votes from the Chatbot Arena Notebook.
Our findings highlight the importance of continued efforts to prevent vote rigging.
- Score: 43.28854307528825
- Abstract: Chatbot Arena is a popular platform for evaluating LLMs by pairwise battles, in which users vote for their preferred response from two randomly sampled anonymous models. While Chatbot Arena is widely regarded as a reliable LLM ranking leaderboard, we show that crowdsourced voting can be rigged to improve (or decrease) the ranking of a target model $m_{t}$. We first introduce a straightforward target-only rigging strategy that focuses on new battles involving $m_{t}$, identifying it via watermarking or a binary classifier and exclusively voting for $m_{t}$ to win. However, this strategy is practically inefficient because there are over $190$ models on Chatbot Arena and, on average, only about $1\%$ of new battles involve $m_{t}$. To overcome this, we propose omnipresent rigging strategies, which exploit the fact that, under Chatbot Arena's Elo rating mechanism, any new vote on a battle can influence the ranking of the target model $m_{t}$, even if $m_{t}$ is not directly involved in the battle. We conduct experiments on around $1.7$ million historical votes from the Chatbot Arena Notebook, showing that omnipresent rigging strategies can improve model rankings by rigging only hundreds of new votes. While we have evaluated several defense mechanisms, our findings highlight the importance of continued efforts to prevent vote rigging. Our code is available at https://github.com/sail-sg/Rigging-ChatbotArena.
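To make the omnipresent mechanism concrete, here is a minimal, self-contained sketch (not the paper's code) of how a single rigged vote in a battle that does not involve $m_{t}$ can still change $m_{t}$'s rank under an Elo-style update; the ratings, K-factor, and model names below are illustrative assumptions.
```python
# Illustrative Elo update (assumed K-factor and ratings), showing that a vote in
# a battle *not* involving m_t can still change m_t's rank, because ranks are
# defined relative to every other model's rating.
K = 4.0  # assumed K-factor for illustration


def elo_update(ratings, winner, loser, k=K):
    """Apply one Elo update for a single battle outcome."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1.0 - expected_win)
    ratings[loser] -= k * (1.0 - expected_win)


def rank_of(ratings, model):
    """1-based rank of `model`, highest rating first."""
    return sorted(ratings, key=ratings.get, reverse=True).index(model) + 1


# Hypothetical leaderboard: m_t trails model_a by a small margin.
ratings = {"m_t": 1200.0, "model_a": 1200.5, "model_b": 1500.0}
print("rank of m_t before:", rank_of(ratings, "m_t"))  # 3

# Rig one battle that m_t is not part of: vote for model_b over model_a.
elo_update(ratings, winner="model_b", loser="model_a")
print("rank of m_t after:", rank_of(ratings, "m_t"))  # 2: m_t overtakes model_a
```
Because ranks are relative, lowering a nearby competitor's rating is enough to move $m_{t}$ up, which is the intuition the omnipresent strategies build on.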
Related papers
- Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards [93.16294577018482]
Chatbot Arena, the most popular benchmark of this type, ranks models by asking users to select the better response between two randomly selected models.
We show that an attacker can alter the leaderboard (to promote their favorite model or demote competitors) at the cost of roughly a thousand votes.
Our attack consists of two steps: first, we show how an attacker can determine which model was used to generate a given reply with more than $95\%$ accuracy; second, the attacker can use this information to consistently vote against a target model.
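As a rough illustration of the two-step attack described above, the sketch below trains a toy detector that guesses whether a reply came from the target model and then votes accordingly; the replies, features, and classifier are hypothetical stand-ins, not the paper's actual method.
```python
# Toy stand-in for "identify the model behind a reply, then vote accordingly".
# The training data is hypothetical; the paper's detector (reported at >95%
# accuracy) is not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled replies: 1 if produced by the target model, else 0.
train_replies = [
    "As an assistant, I would summarize the article as follows ...",
    "Sure! Here's a quick answer to your question ...",
    "The target model tends to phrase things like this ...",
    "Another model phrases its answers differently ...",
]
train_labels = [1, 0, 1, 0]

detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(train_replies, train_labels)


def choose_vote(reply_a: str, reply_b: str) -> str:
    """Vote for whichever side the detector believes is the target model."""
    prob_a, prob_b = detector.predict_proba([reply_a, reply_b])[:, 1]
    return "A" if prob_a >= prob_b else "B"


print(choose_vote("The target model tends to ...", "Another model says ..."))
```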
arXiv Detail & Related papers (2025-01-13T17:12:38Z)
- Evaluating the Robustness of the "Ensemble Everything Everywhere" Defense [90.7494670101357]
"Ensemble everything everywhere" is a defense against adversarial examples.
We show that this defense is not robust to adversarial attack.
We then use standard adaptive attack techniques to reduce the defense's robust accuracy.
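For context, "standard adaptive attack techniques" typically build on PGD-style optimization through the full defended pipeline; the sketch below shows a generic L-infinity PGD loop against a placeholder model and is not the specific adaptive attack evaluated in that paper.
```python
# Generic L-infinity PGD loop against a placeholder differentiable model.
# Purely illustrative: the defended model, budget, and step sizes are assumptions.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """Maximize cross-entropy within an eps-ball around x (L-infinity PGD)."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


# Placeholder "defense" for demonstration purposes only.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
```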
arXiv Detail & Related papers (2024-11-22T10:17:32Z)
- Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference [48.99117537559644]
We introduce Chatbot Arena, an open platform for evaluating Large Language Models (LLMs) based on human preferences.
Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing.
This paper describes the platform, analyzes the data we have collected so far, and explains the tried-and-true statistical methods we are using.
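One tried-and-true way to turn such pairwise votes into a leaderboard is a Bradley-Terry-style fit; the sketch below estimates per-model scores from a few hypothetical battles via logistic regression, and does not reproduce Chatbot Arena's exact estimator, tie handling, or confidence intervals.
```python
# Hypothetical battles and a lightly regularized logistic-regression fit of
# Bradley-Terry-style model scores; not Chatbot Arena's exact pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

models = ["model_a", "model_b", "model_c"]  # assumed model names
# Each battle: (A-side model, B-side model, 1 if the A-side model won else 0).
battles = [
    ("model_a", "model_b", 1),
    ("model_a", "model_b", 0),
    ("model_a", "model_b", 1),
    ("model_b", "model_c", 1),
    ("model_a", "model_c", 1),
    ("model_c", "model_a", 1),
]

index = {m: i for i, m in enumerate(models)}
X = np.zeros((len(battles), len(models)))
y = np.zeros(len(battles))
for row, (a, b, a_won) in enumerate(battles):
    X[row, index[a]] = 1.0   # +1 for the A-side model
    X[row, index[b]] = -1.0  # -1 for the B-side model
    y[row] = a_won

# The coefficients of a no-intercept logistic fit act as Bradley-Terry log-strengths.
bt = LogisticRegression(fit_intercept=False, max_iter=1000).fit(X, y)
scores = dict(zip(models, bt.coef_[0]))
print(sorted(scores, key=scores.get, reverse=True))  # leaderboard, strongest first
```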
arXiv Detail & Related papers (2024-03-07T01:22:38Z)
- Adding guardrails to advanced chatbots [5.203329540700177]
The launch of ChatGPT in November 2022 ushered in a new era of AI.
There are already concerns that humans may be replaced by chatbots for a variety of jobs, and that chatbots may exhibit biases in their responses.
These biases may cause significant harm and/or inequity toward different subpopulations.
arXiv Detail & Related papers (2023-06-13T02:23:04Z)
- On Safe and Usable Chatbots for Promoting Voter Participation [8.442334707366173]
We build a system that amplifies official information while personalizing it to users' unique needs.
Our approach can be a win-win for voters, election agencies trying to fulfill their mandate, and democracy at large.
arXiv Detail & Related papers (2022-12-16T08:07:51Z)
- MulBot: Unsupervised Bot Detection Based on Multivariate Time Series [2.525739800601558]
MulBot is an unsupervised bot detector based on multidimensional temporal features extracted from user timelines.
We perform a binary classification task, achieving an F1-score of $0.99$ and outperforming state-of-the-art methods.
We also demonstrate MulBot's strengths in a novel and practically-relevant task: detecting and separating different botnets.
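The sketch below illustrates the general shape of such a pipeline (per-user multivariate time series, summary features, unsupervised grouping) on synthetic data; it is a hypothetical stand-in and does not reproduce MulBot's actual features or architecture.
```python
# Synthetic per-user multivariate timelines -> simple summary features ->
# unsupervised grouping. Purely illustrative; MulBot's real features and
# architecture are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_users, n_days, n_signals = 200, 30, 3  # e.g., daily tweets, retweets, mentions
timelines = rng.poisson(lam=2.0, size=(n_users, n_days, n_signals)).astype(float)

# Summarize each signal's time series per user with a few simple statistics.
features = np.concatenate(
    [
        timelines.mean(axis=1),                           # average daily activity
        timelines.std(axis=1),                            # burstiness
        np.abs(np.diff(timelines, axis=1)).mean(axis=1),  # day-to-day variation
    ],
    axis=1,
)

# Reduce dimensionality, then split users into two groups; in a real pipeline
# the "bot" cluster would be identified by inspecting coordinated behavior.
embedded = PCA(n_components=2).fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
print("cluster sizes:", np.bincount(labels))
```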
arXiv Detail & Related papers (2022-09-21T13:56:12Z)
- Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack [96.50202709922698]
A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable.
We propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses efficiency and reliability in a test-time-training fashion.
arXiv Detail & Related papers (2022-03-10T04:53:54Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
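The XGBoost-plus-SHAP pattern described above can be sketched as follows on synthetic account features; the feature names, labels, and hyperparameters are hypothetical, and this is not the paper's actual dataset or model configuration.
```python
# Synthetic account features, an XGBoost classifier, and SHAP attributions;
# the feature set, labels, and hyperparameters are illustrative assumptions.
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["followers", "friends", "statuses_per_day", "account_age_days"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * rng.normal(size=500) > 0).astype(int)  # hypothetical bot labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# SHAP attributes each prediction to individual features, explaining the model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, importance in zip(feature_names, mean_abs):
    print(f"{name}: mean |SHAP| = {importance:.3f}")
```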
arXiv Detail & Related papers (2021-12-08T14:12:24Z)