Finding Common Ground: Using Large Language Models to Detect Agreement in Multi-Agent Decision Conferences
- URL: http://arxiv.org/abs/2507.08440v1
- Date: Fri, 11 Jul 2025 09:31:10 GMT
- Title: Finding Common Ground: Using Large Language Models to Detect Agreement in Multi-Agent Decision Conferences
- Authors: Selina Heller, Mohamed Ibrahim, David Antony Selby, Sebastian Vollmer
- Abstract summary: Large Language Models (LLMs) have shown significant promise in simulating real-world scenarios. We present a novel LLM-based multi-agent system designed to simulate group decision conferences. Our results indicate that LLMs can reliably detect agreement even in dynamic and nuanced debates.
- Score: 2.0543882079879996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision conferences are structured, collaborative meetings that bring together experts from various fields to address complex issues and reach a consensus on recommendations for future actions or policies. These conferences often rely on facilitated discussions to ensure productive dialogue and collective agreement. Recently, Large Language Models (LLMs) have shown significant promise in simulating real-world scenarios, particularly through collaborative multi-agent systems that mimic group interactions. In this work, we present a novel LLM-based multi-agent system designed to simulate decision conferences, specifically focusing on detecting agreement among the participant agents. To achieve this, we evaluate six distinct LLMs on two tasks: stance detection, which identifies the position an agent takes on a given issue, and stance polarity detection, which classifies the sentiment of that stance as positive, negative, or neutral. These models are further assessed within the multi-agent system to determine their effectiveness in complex simulations. Our results indicate that LLMs can reliably detect agreement even in dynamic and nuanced debates. Incorporating an agreement-detection agent within the system can also improve the efficiency of group debates and enhance the overall quality and coherence of deliberations, making them comparable to real-world decision conferences in terms of outcomes and decision-making. These findings demonstrate the potential of LLM-based multi-agent systems to simulate group decision-making processes, and suggest that such systems could support expert-elicitation workshops across various domains.
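The two detection tasks and the agreement check built on them can be illustrated with a minimal sketch. Everything here is hypothetical: the keyword-based `classify_polarity` is only a stand-in for the LLM classifiers the paper evaluates, and `detect_agreement` is one plausible reading of "agreement among the participant agents" (all non-neutral stances share a polarity), not the paper's actual mechanism.

```python
def classify_polarity(statement: str) -> str:
    """Stance polarity detection: label a statement as positive, negative,
    or neutral. In the paper's system an LLM performs this step; the
    keyword rules below are only a stub so the sketch runs end to end."""
    text = statement.lower()
    # Check negative cues first so "disagree" is not matched by "agree".
    if any(w in text for w in ("oppose", "disagree", "against")):
        return "negative"
    if any(w in text for w in ("support", "agree", "favor")):
        return "positive"
    return "neutral"

def detect_agreement(statements: list[str]) -> bool:
    """One simple notion of agreement: every non-neutral stance in the
    debate shares the same polarity."""
    polarities = {classify_polarity(s) for s in statements} - {"neutral"}
    return len(polarities) <= 1

debate = [
    "I support the proposed policy.",
    "I agree, the benefits outweigh the costs.",
    "No strong view either way.",
]
print(detect_agreement(debate))  # True: all non-neutral stances are positive
```

In the paper's setting the stub would be replaced by a prompted LLM call per statement, with the agreement-detection agent polling these labels each debate round.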
Related papers
- Towards Agentic Recommender Systems in the Era of Multimodal Large Language Models
Recent breakthroughs in Large Language Models (LLMs) have led to the emergence of agentic AI systems. LLM-based Agentic RS (LLM-ARS) can offer more interactive, context-aware, and proactive recommendations.
arXiv Detail & Related papers (2025-03-20T22:37:15Z) - Is Multi-Agent Debate (MAD) the Silver Bullet? An Empirical Analysis of MAD in Code Summarization and Translation [10.038721196640864]
Multi-Agent Debate (MAD) systems enable structured debates among Large Language Models (LLMs). MAD promotes divergent thinking through role-specific agents, dynamic interactions, and structured decision-making. This study investigates MAD's effectiveness on two Software Engineering (SE) tasks.
arXiv Detail & Related papers (2025-03-15T07:30:37Z) - Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems [23.379992200838053]
Large language model-based multi-agent systems have recently gained significant attention due to their potential for complex, collaborative, and intelligent problem-solving capabilities. Existing surveys typically categorize LLM-MAS according to their application domains or architectures, overlooking the central role of communication in coordinating agent behaviors and interactions. This review aims to help researchers and practitioners gain a clear understanding of the communication mechanisms in LLM-MAS, thereby facilitating the design and deployment of robust, scalable, and secure multi-agent systems.
arXiv Detail & Related papers (2025-02-20T07:18:34Z) - Agentic LLM Framework for Adaptive Decision Discourse [2.4919169815423743]
This study introduces an agentic Large Language Model (LLM) framework inspired by real-world decision processes. Unlike traditional decision-support tools, the framework emphasizes dialogue, trade-off exploration, and the emergent synergies generated by interactions among agents. Results reveal how breadth-first exploration of alternatives fosters robust and equitable recommendation pathways.
arXiv Detail & Related papers (2025-02-16T03:46:37Z) - RoundTable: Investigating Group Decision-Making Mechanism in Multi-Agent Collaboration [49.4875652673051]
We analyze how different voting rules affect decision quality and efficiency in multi-round collaboration. At the extreme, unanimous voting gives 87% lower initial performance than the best-performing method. Our findings highlight the crucial role of group decision-making in optimizing MAS collaboration.
arXiv Detail & Related papers (2024-11-11T17:37:47Z) - Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z) - Persona Inconstancy in Multi-Agent LLM Collaboration: Conformity, Confabulation, and Impersonation [16.82101507069166]
Multi-agent AI systems can be used for simulating collective decision-making in scientific and practical applications.
We examine AI agent ensembles engaged in cross-national collaboration and debate by analyzing their private responses and chat transcripts.
Our findings suggest that multi-agent discussions can support collective AI decisions that more often reflect diverse perspectives.
arXiv Detail & Related papers (2024-05-06T21:20:35Z) - Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - Large Language Model Enhanced Multi-Agent Systems for 6G Communications [94.45712802626794]
We propose a multi-agent system with customized communication knowledge and tools for solving communication related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z) - Multi-Agent Consensus Seeking via Large Language Models [6.336670103502898]
Multi-agent systems driven by large language models (LLMs) have shown promising abilities for solving complex tasks in a collaborative manner. This work considers a fundamental problem in multi-agent collaboration: consensus seeking.
arXiv Detail & Related papers (2023-10-31T03:37:11Z) - Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [52.930183136111864]
We propose using scorable negotiation to evaluate Large Language Models (LLMs).
To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities.
We provide procedures to create new games and increase games' difficulty to have an evolving benchmark.
arXiv Detail & Related papers (2023-09-29T13:33:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.