Debate or Vote: Which Yields Better Decisions in Multi-Agent Large Language Models?
- URL: http://arxiv.org/abs/2508.17536v1
- Date: Sun, 24 Aug 2025 22:14:32 GMT
- Title: Debate or Vote: Which Yields Better Decisions in Multi-Agent Large Language Models?
- Authors: Hyeong Kyu Choi, Xiaojin Zhu, Yixuan Li
- Abstract summary: Multi-Agent Debate (MAD) has emerged as a promising paradigm for improving the performance of large language models. Despite recent advances, the key factors driving MAD's effectiveness remain unclear. We disentangle MAD into two key components, Majority Voting and inter-agent Debate, and assess their respective contributions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-Agent Debate (MAD) has emerged as a promising paradigm for improving the performance of large language models through collaborative reasoning. Despite recent advances, the key factors driving MAD's effectiveness remain unclear. In this work, we disentangle MAD into two key components, Majority Voting and inter-agent Debate, and assess their respective contributions. Through extensive experiments across seven NLP benchmarks, we find that Majority Voting alone accounts for most of the performance gains typically attributed to MAD. To explain this, we propose a theoretical framework that models debate as a stochastic process. We prove that it induces a martingale over agents' belief trajectories, implying that debate alone does not improve expected correctness. Guided by these insights, we demonstrate that targeted interventions, by biasing the belief update toward correction, can meaningfully enhance debate effectiveness. Overall, our findings suggest that while MAD has potential, simple ensembling methods remain strong and more reliable alternatives in many practical settings. Code is released at https://github.com/deeplearning-wisc/debate-or-vote.
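The abstract's two main claims, that majority voting alone carries most of the gain and that an unbiased debate forms a martingale over beliefs, can be illustrated with a small Monte Carlo sketch. This is not the paper's actual model or experiments; the belief parameterization, step size, and bias value are illustrative assumptions.

```python
import random

def debate_accuracy(n_agents=5, p_correct=0.6, rounds=3, bias=0.0,
                    trials=20000, seed=0):
    """Estimate majority-vote accuracy when each agent's belief in the
    correct answer evolves over simulated debate rounds.

    With bias == 0 the belief update is a symmetric random walk, i.e. a
    martingale: E[belief] stays at p_correct, so expected per-agent
    correctness (and hence majority accuracy) does not improve.
    A positive bias models a targeted intervention that drifts beliefs
    toward the correct answer.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        beliefs = [p_correct] * n_agents
        for _ in range(rounds):
            # symmetric +/-0.1 step plus optional corrective drift,
            # clipped to remain a valid probability
            beliefs = [min(1.0, max(0.0, b + rng.choice((-0.1, 0.1)) + bias))
                       for b in beliefs]
        # each agent votes for the correct answer w.p. equal to its belief
        votes = sum(rng.random() < b for b in beliefs)
        hits += votes > n_agents / 2
    return hits / trials
```

Under these assumptions, `debate_accuracy(rounds=0)` (pure majority voting over independent agents) already beats a single agent; adding unbiased debate rounds leaves accuracy essentially unchanged, while a small positive `bias` raises it noticeably.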
Related papers
- DynaDebate: Breaking Homogeneity in Multi-Agent Debate with Dynamic Path Generation [47.62978918069135]
We introduce Dynamic Multi-Agent Debate (DynaDebate), which enhances the effectiveness of multi-agent debate through three key mechanisms. Extensive experiments demonstrate that DynaDebate achieves superior performance across various benchmarks, surpassing existing state-of-the-art MAD methods.
arXiv Detail & Related papers (2026-01-09T12:01:33Z) - Demystifying Multi-Agent Debate: The Role of Confidence and Diversity [31.236476720977294]
Multi-agent debate (MAD) is widely used to improve large language model (LLM) performance through test-time scaling. Recent work shows that vanilla MAD often underperforms simple majority vote despite higher computational cost. We identify two key mechanisms missing from vanilla MAD: (i) diversity of initial viewpoints and (ii) explicit, calibrated confidence communication.
arXiv Detail & Related papers (2026-01-09T02:38:30Z) - iMAD: Intelligent Multi-Agent Debate for Efficient and Accurate LLM Inference [11.86992814928132]
Multi-Agent Debate (MAD) has emerged as a promising framework that engages multiple agents in structured debates. We propose intelligent Multi-Agent Debate (iMAD), a token-efficient framework that selectively triggers MAD only when it is likely to be beneficial. We show that iMAD significantly reduces token usage (by up to 92%) while also improving final answer accuracy (by up to 13.5%).
arXiv Detail & Related papers (2025-11-14T13:50:51Z) - Towards Scalable Oversight with Collaborative Multi-Agent Debate in Error Detection [81.52796950244705]
Self-diagnosis is unreliable on complex tasks unless aided by reliable external feedback. We introduce a new collaborative MAD protocol, termed ColMAD, that reframes MAD as a non-zero-sum game. We show that ColMAD significantly outperforms previous competitive MAD by 19%.
arXiv Detail & Related papers (2025-10-23T19:46:00Z) - MADIAVE: Multi-Agent Debate for Implicit Attribute Value Extraction [52.89860691282002]
Implicit Attribute Value Extraction (AVE) is essential for accurately representing products in e-commerce. Despite advances in multimodal large language models (MLLMs), implicit AVE remains challenging due to the complexity of multidimensional data. We introduce MADIAVE, a multi-agent debate framework that employs multiple MLLM agents to iteratively refine inferences.
arXiv Detail & Related papers (2025-10-07T06:27:42Z) - Enhancing Multi-Agent Debate System Performance via Confidence Expression [55.34012400580016]
Multi-Agent Debate (MAD) systems simulate human debate and thereby improve task performance. Some Large Language Models (LLMs) possess superior knowledge or reasoning capabilities for specific tasks, but struggle to clearly communicate this advantage during debates. Inappropriate confidence expression can cause agents in MAD systems to either stubbornly maintain incorrect beliefs or converge prematurely on suboptimal answers. We develop ConfMAD, a MAD framework that integrates confidence expression throughout the debate process.
arXiv Detail & Related papers (2025-09-17T14:34:27Z) - Free-MAD: Consensus-Free Multi-Agent Debate [17.384699873512464]
Multi-agent debate (MAD) is an emerging approach to improving the reasoning capabilities of large language models (LLMs). Existing MAD methods rely on multiple rounds of interaction among agents to reach consensus, and the final output is selected by majority voting in the last round. We propose Free-MAD, a novel MAD framework that eliminates the need for consensus among agents.
arXiv Detail & Related papers (2025-09-14T01:55:01Z) - Debating Truth: Debate-driven Claim Verification with Multiple Large Language Model Agents [13.626715532559079]
We propose DebateCV, the first claim verification framework that adopts a debate-driven methodology using multiple LLM agents. In our framework, two Debaters take opposing stances on a claim and engage in multi-round argumentation, while a Moderator evaluates the arguments and renders a verdict with justifications. Experimental results show that our method outperforms existing claim verification methods under varying levels of evidence quality.
arXiv Detail & Related papers (2025-07-25T09:19:25Z) - CortexDebate: Debating Sparsely and Equally for Multi-Agent Debate [11.155092859033784]
Multi-Agent Debate (MAD) has emerged as an effective strategy to mitigate issues with single Large Language Models (LLMs). Existing MAD methods face two major issues: (a) overly lengthy input contexts, which cause LLM agents to get lost in abundant input information and suffer performance drops; and (b) the overconfidence dilemma, where self-assured LLM agents dominate the debate, leading to low debating effectiveness. We propose a novel MAD method called CortexDebate, inspired by the human brain's tendency to establish a sparse and dynamically optimized network among cortical areas governed by white matter.
arXiv Detail & Related papers (2025-07-05T07:23:15Z) - Stop Overvaluing Multi-Agent Debate -- We Must Rethink Evaluation and Embrace Model Heterogeneity [20.408720462383158]
Multi-agent debate (MAD) has gained significant attention as a promising line of research to improve the factual accuracy and reasoning capabilities of large language models (LLMs). Despite its conceptual appeal, current MAD research suffers from critical limitations in evaluation practices. This paper presents a systematic evaluation of 5 representative MAD methods across 9 benchmarks using 4 foundational models.
arXiv Detail & Related papers (2025-02-12T21:01:10Z) - ACC-Collab: An Actor-Critic Approach to Multi-Agent LLM Collaboration [20.040543142468344]
ACC-Collab is an Actor-Critic based learning framework that produces a two-agent team specialized in collaboration. We demonstrate that ACC-Collab outperforms SotA multi-agent techniques on a wide array of benchmarks.
arXiv Detail & Related papers (2024-10-30T19:09:02Z) - DebUnc: Improving Large Language Model Agent Communication With Uncertainty Metrics [52.242449026151846]
Multi-agent debates have been introduced to improve the accuracy of Large Language Models (LLMs). We propose DebUnc, a debate framework that uses uncertainty metrics to assess agent confidence.
arXiv Detail & Related papers (2024-07-08T22:15:01Z) - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.