Beyond Frameworks: Unpacking Collaboration Strategies in Multi-Agent Systems
- URL: http://arxiv.org/abs/2505.12467v1
- Date: Sun, 18 May 2025 15:46:14 GMT
- Title: Beyond Frameworks: Unpacking Collaboration Strategies in Multi-Agent Systems
- Authors: Haochun Wang, Sendong Zhao, Jingbo Wang, Zewen Qiang, Bing Qin, Ting Liu
- Abstract summary: This study systematically investigates four dimensions of collaboration strategies. We quantify the impact of these strategies on both task accuracy and computational efficiency. This work establishes a foundation for designing adaptive, scalable multi-agent systems.
- Score: 29.924868489451327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent collaboration has emerged as a pivotal paradigm for addressing complex, distributed tasks in large language model (LLM)-driven applications. While prior research has focused on high-level architectural frameworks, the granular mechanisms governing agents, critical to performance and scalability, remain underexplored. This study systematically investigates four dimensions of collaboration strategies: (1) agent governance, (2) participation control, (3) interaction dynamics, and (4) dialogue history management. Through rigorous experimentation under two context-dependent scenarios: Distributed Evidence Integration (DEI) and Structured Evidence Synthesis (SES), we quantify the impact of these strategies on both task accuracy and computational efficiency. Our findings reveal that centralized governance, instructor-led participation, ordered interaction patterns, and instructor-curated context summarization collectively optimize the trade-off between decision quality and resource utilization with the support of the proposed Token-Accuracy Ratio (TAR). This work establishes a foundation for designing adaptive, scalable multi-agent systems, shifting the focus from structural novelty to strategic interaction mechanics.
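The abstract names four strategy dimensions (agent governance, participation control, interaction dynamics, dialogue history management) and reports results via a proposed Token-Accuracy Ratio (TAR), but does not spell out the metric. Below is a minimal Python sketch assuming one plausible reading of TAR as task accuracy per thousand tokens consumed; every class, option name, and the formula itself are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical encoding of the four collaboration-strategy dimensions from the
# abstract, plus an assumed form of the Token-Accuracy Ratio (TAR).
from dataclasses import dataclass
from enum import Enum


class Governance(Enum):
    CENTRALIZED = "centralized"      # one instructor agent coordinates the group
    DECENTRALIZED = "decentralized"  # peers coordinate among themselves


class Participation(Enum):
    INSTRUCTOR_LED = "instructor_led"  # instructor decides which agent speaks next
    OPEN = "open"                      # any agent may respond at any point


class Interaction(Enum):
    ORDERED = "ordered"      # fixed turn-taking pattern
    UNORDERED = "unordered"  # unconstrained turn order


class HistoryManagement(Enum):
    FULL = "full"                              # every agent sees the full dialogue history
    INSTRUCTOR_SUMMARY = "instructor_summary"  # instructor-curated context summaries


@dataclass
class CollaborationStrategy:
    governance: Governance
    participation: Participation
    interaction: Interaction
    history: HistoryManagement


def token_accuracy_ratio(accuracy: float, total_tokens: int) -> float:
    """Assumed TAR: task accuracy per 1k tokens consumed (illustrative only)."""
    if total_tokens <= 0:
        raise ValueError("total_tokens must be positive")
    return accuracy / (total_tokens / 1000)


# The configuration the abstract reports as optimizing the quality/cost trade-off:
best = CollaborationStrategy(
    governance=Governance.CENTRALIZED,
    participation=Participation.INSTRUCTOR_LED,
    interaction=Interaction.ORDERED,
    history=HistoryManagement.INSTRUCTOR_SUMMARY,
)
print(best, token_accuracy_ratio(accuracy=0.82, total_tokens=12_500))
```

Under this reading, the best-reported configuration (centralized governance, instructor-led participation, ordered turns, instructor-curated summaries) would be the one maximizing TAR across the DEI and SES scenarios.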
Related papers
- Multi-Agent Collaboration via Evolving Orchestration [61.93162413517026]
Large language models (LLMs) have achieved remarkable results across diverse downstream tasks, but their monolithic nature restricts scalability and efficiency in complex problem-solving. We propose a puppeteer-style paradigm for LLM-based multi-agent collaboration, where a central orchestrator dynamically directs agents in response to evolving task states. Experiments on closed- and open-domain scenarios show that this method achieves superior performance with reduced computational costs.
arXiv Detail & Related papers (2025-05-26T07:02:17Z) - Towards Multi-Agent Reasoning Systems for Collaborative Expertise Delegation: An Exploratory Design Study [45.90906050232582]
This paper systematically investigates how collaborative reasoning performance is affected by three key design dimensions. Our findings reveal that expertise alignment benefits are highly domain-contingent, proving most effective for contextual reasoning tasks. Finally, we empirically explore the impact of scaling the multi-agent system with expertise and study the computational trade-off, highlighting the need for more efficient communication protocol design.
arXiv Detail & Related papers (2025-05-12T07:59:13Z) - Advancing Multi-Agent Systems Through Model Context Protocol: Architecture, Implementation, and Applications [0.0]
This paper introduces a comprehensive framework for advancing multi-agent systems through Model Context Protocol (MCP). We extend previous work on AI agent architectures by developing a unified theoretical foundation, advanced context management techniques, and scalable coordination patterns. We identify current limitations, emerging research opportunities, and potential transformative applications across industries.
arXiv Detail & Related papers (2025-04-26T03:43:03Z) - A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z) - MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents [59.825725526176655]
Large Language Models (LLMs) have shown remarkable capabilities as autonomous agents. Existing benchmarks either focus on single-agent tasks or are confined to narrow domains, failing to capture the dynamics of multi-agent coordination and competition. We introduce MultiAgentBench, a benchmark designed to evaluate LLM-based multi-agent systems across diverse, interactive scenarios.
arXiv Detail & Related papers (2025-03-03T05:18:50Z) - A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition [71.61103962200666]
Zero-shot named entity recognition (NER) aims to develop entity recognition systems from unannotated text corpora. Recent work has adapted large language models (LLMs) for zero-shot NER by crafting specialized prompt templates. We introduce the cooperative multi-agent system (CMAS), a novel framework for zero-shot NER.
arXiv Detail & Related papers (2025-02-25T23:30:43Z) - Agentic LLM Framework for Adaptive Decision Discourse [2.4919169815423743]
This study introduces a real-world-inspired agentic Large Language Model (LLM) framework. Unlike traditional decision-support tools, the framework emphasizes dialogue, trade-off exploration, and the emergent synergies generated by interactions among agents. Results reveal how the breadth-first exploration of alternatives fosters robust and equitable recommendation pathways.
arXiv Detail & Related papers (2025-02-16T03:46:37Z) - Enhancing Cooperation through Selective Interaction and Long-term Experiences in Multi-Agent Reinforcement Learning [10.932974027102619]
This study introduces a computational framework based on multi-agent reinforcement learning in the spatial Prisoner's Dilemma game.
By modelling each agent using two distinct Q-networks, we disentangle the coevolutionary dynamics between cooperation and interaction.
arXiv Detail & Related papers (2024-05-04T12:42:55Z) - Hierarchical Decision Making Based on Structural Information Principles [19.82391136775341]
We propose a novel Structural Information principles-based framework, namely SIDM, for hierarchical Decision Making. We present an abstraction mechanism that processes historical state-action trajectories to construct abstract representations of states and actions. We develop a skill-based learning method for single-agent scenarios and a role-based collaboration method for multi-agent scenarios, both of which can flexibly integrate various underlying algorithms for enhanced performance.
arXiv Detail & Related papers (2024-04-15T13:02:00Z) - LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
It underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.