Cooperation Breakdown in LLM Agents Under Communication Delays
- URL: http://arxiv.org/abs/2602.11754v1
- Date: Thu, 12 Feb 2026 09:31:47 GMT
- Title: Cooperation Breakdown in LLM Agents Under Communication Delays
- Authors: Keita Nishimoto, Kimitaka Asatani, Ichiro Sakata
- Abstract summary: We propose the FLCOA framework to conceptualize how cooperation and coordination emerge in groups of autonomous agents. To examine the effect of communication delay, we introduce a Continuous Prisoner's Dilemma with Communication Delay. We find that excessive delay reduces cycles of exploitation, yielding a U-shaped relationship between delay magnitude and mutual cooperation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LLM-based multi-agent systems (LLM-MAS), in which autonomous AI agents cooperate to solve tasks, are gaining increasing attention. For such systems to be deployed in society, agents must be able to establish cooperation and coordination under real-world computational and communication constraints. We propose the FLCOA framework (Five Layers for Cooperation/Coordination among Autonomous Agents) to conceptualize how cooperation and coordination emerge in groups of autonomous agents, and highlight that the influence of lower-layer factors - especially computational and communication resources - has been largely overlooked. To examine the effect of communication delay, we introduce a Continuous Prisoner's Dilemma with Communication Delay and conduct simulations with LLM-based agents. As delay increases, agents begin to exploit slower responses even without explicit instructions. Interestingly, excessive delay reduces cycles of exploitation, yielding a U-shaped relationship between delay magnitude and mutual cooperation. These results suggest that fostering cooperation requires attention not only to high-level institutional design but also to lower-layer factors such as communication delay and resource allocation, pointing to new directions for MAS research.
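The Continuous Prisoner's Dilemma with Communication Delay described above can be illustrated with a minimal toy simulation. Note that the payoff function, the reciprocating policy, and the small "shading" step below are illustrative assumptions for exposition, not the paper's actual parameters or agent design (the paper uses LLM-based agents):

```python
# Toy sketch of a Continuous Prisoner's Dilemma with communication delay.
# Each agent picks a cooperation level in [0, 1]; an agent only observes
# its partner's action from `delay` rounds ago, so reciprocation lags.

def payoff(own, partner, benefit=2.0, cost=1.0):
    """Continuous PD payoff (assumed form): gain from the partner's
    cooperation, minus the cost of one's own cooperation."""
    return benefit * partner - cost * own

def simulate(delay, rounds=20):
    """Two reciprocating agents. Agent A shades its cooperation slightly
    below what it observes (an exploitation attempt); agent B simply
    mirrors A's most recent observable action."""
    a_hist, b_hist = [1.0], [1.0]  # both start fully cooperative
    for t in range(1, rounds):
        # Most recent partner action visible under the delay.
        a_sees = b_hist[max(0, t - 1 - delay)]
        b_sees = a_hist[max(0, t - 1 - delay)]
        a_hist.append(max(0.0, a_sees - 0.1))  # exploit by shading
        b_hist.append(b_sees)                  # mirror what is seen
    return a_hist, b_hist

a, b = simulate(delay=3)
print(round(a[-1], 2), round(b[-1], 2))  # cooperation erodes over time
```

Because B's retaliation arrives `delay` rounds late, A's shading goes unpunished for a window of rounds, which is one mechanism by which delay can invite exploitation.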
Related papers
- Towards Adaptive, Scalable, and Robust Coordination of LLM Agents: A Dynamic Ad-Hoc Networking Perspective [31.81236449944822]
RAPS is a reputation-aware publish-subscribe paradigm for adaptive, scalable, and robust coordination of LLM agents. RAPS incorporates two coherent overlays: (i) Reactive Subscription, enabling agents to dynamically refine their intents; and (ii) Bayesian Reputation, empowering each agent with a local watchdog to detect and isolate malicious peers.
arXiv Detail & Related papers (2026-02-08T15:26:02Z) - The Era of Agentic Organization: Learning to Organize with Language Models [107.41382234213893]
We introduce asynchronous thinking (AsyncThink) as a new paradigm of reasoning with large language models. Experiments demonstrate that AsyncThink achieves 28% lower inference latency compared to parallel thinking. AsyncThink generalizes its learned asynchronous thinking capabilities, effectively tackling unseen tasks without additional training.
arXiv Detail & Related papers (2025-10-30T16:25:10Z) - Can LLM Agents Solve Collaborative Tasks? A Study on Urgency-Aware Planning and Coordination [4.511923587827302]
Large Language Models (LLMs) have shown strong capabilities in communication, planning, and reasoning. This study offers new insights into the strengths and failure modes of LLMs in physically grounded multi-agent collaboration tasks.
arXiv Detail & Related papers (2025-08-20T11:44:10Z) - Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games [87.5673042805229]
How large language models balance self-interest and collective well-being is a critical challenge for ensuring alignment, robustness, and safe deployment. We adapt a public goods game with institutional choice from behavioral economics, allowing us to observe how different LLMs navigate social dilemmas. Surprisingly, we find that reasoning LLMs, such as the o1 series, struggle significantly with cooperation.
arXiv Detail & Related papers (2025-06-29T15:02:47Z) - Multi-Agent Collaboration via Evolving Orchestration [55.574417128944226]
Large language models (LLMs) have achieved remarkable results across diverse downstream tasks, but their monolithic nature restricts scalability and efficiency in complex problem-solving. We propose a puppeteer-style paradigm for LLM-based multi-agent collaboration, where a centralized orchestrator ("puppeteer") dynamically directs agents ("puppets") in response to evolving task states. Experiments on closed- and open-domain scenarios show that this method achieves superior performance with reduced computational costs.
arXiv Detail & Related papers (2025-05-26T07:02:17Z) - Benchmarking LLMs' Swarm intelligence [51.648605206159125]
Large Language Models (LLMs) show potential for complex reasoning, yet their capacity for emergent coordination in Multi-Agent Systems (MAS) remains largely unexplored. We introduce SwarmBench, a novel benchmark designed to systematically evaluate tasks of LLMs acting as decentralized agents. We propose metrics for coordination effectiveness and analyze emergent group dynamics.
arXiv Detail & Related papers (2025-05-07T12:32:01Z) - Multi-Agent Autonomous Driving Systems with Large Language Models: A Survey of Recent Advances [61.539442227802226]
Large Language Models (LLMs) have been integrated into Autonomous Driving Systems (ADSs) to support high-level decision-making. LLMs face three major challenges: limited perception, insufficient collaboration, and high computational demands. Recent advances in multi-agent ADSs leverage language-driven communication and coordination to enhance inter-agent collaboration.
arXiv Detail & Related papers (2025-02-24T03:26:13Z) - CoDe: Communication Delay-Tolerant Multi-Agent Collaboration via Dual Alignment of Intent and Timeliness [21.627120541083553]
This paper proposes a novel framework, Communication Delay-tolerant Multi-Agent Collaboration (CoDe). At first, CoDe learns an intent representation as messages through future action inference. Then, CoDe devises a dual alignment mechanism of intent and timeliness to strengthen the fusion process of asynchronous messages.
arXiv Detail & Related papers (2025-01-09T12:57:41Z) - Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models [41.95288786980204]
Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication.
We present a framework for training large language models as collaborative agents to enable coordinated behaviors in cooperative MARL.
A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates.
arXiv Detail & Related papers (2024-07-17T13:14:00Z) - Embodied LLM Agents Learn to Cooperate in Organized Teams [46.331162216503344]
Large Language Models (LLMs) have emerged as integral tools for reasoning, planning, and decision-making.
This paper introduces a framework that imposes prompt-based organization structures on LLM agents to mitigate these problems.
arXiv Detail & Related papers (2024-03-19T06:39:47Z) - Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z) - DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning [19.36041216505116]
We show that ignoring communication delays has detrimental effects on collaborations.
We design a delay-aware multi-agent communication model (DACOM) to adapt communication to delays.
Our experiments reveal that DACOM has a non-negligible performance improvement over other mechanisms.
arXiv Detail & Related papers (2022-12-03T14:20:59Z)