AgentNet: Decentralized Evolutionary Coordination for LLM-based Multi-Agent Systems
- URL: http://arxiv.org/abs/2504.00587v2
- Date: Thu, 29 May 2025 18:55:08 GMT
- Title: AgentNet: Decentralized Evolutionary Coordination for LLM-based Multi-Agent Systems
- Authors: Yingxuan Yang, Huacan Chai, Shuai Shao, Yuanyi Song, Siyuan Qi, Renting Rui, Weinan Zhang,
- Abstract summary: AgentNet is a decentralized, Retrieval-Augmented Generation (RAG)-based framework for multi-agent systems. Unlike prior approaches with static roles or centralized control, AgentNet allows agents to adjust connectivity and route tasks based on local expertise and context. Experiments show that AgentNet achieves higher task accuracy than both single-agent and centralized multi-agent baselines.
- Score: 22.291969093748005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of large language models (LLMs) has enabled the development of multi-agent systems where multiple LLM-based agents collaborate on complex tasks. However, existing systems often rely on centralized coordination, leading to scalability bottlenecks, reduced adaptability, and single points of failure. Privacy and proprietary knowledge concerns further hinder cross-organizational collaboration, resulting in siloed expertise. We propose AgentNet, a decentralized, Retrieval-Augmented Generation (RAG)-based framework that enables LLM-based agents to specialize, evolve, and collaborate autonomously in a dynamically structured Directed Acyclic Graph (DAG). Unlike prior approaches with static roles or centralized control, AgentNet allows agents to adjust connectivity and route tasks based on local expertise and context. AgentNet introduces three key innovations: (1) a fully decentralized coordination mechanism that eliminates the need for a central orchestrator, enhancing robustness and emergent intelligence; (2) dynamic agent graph topology that adapts in real time to task demands, ensuring scalability and resilience; and (3) a retrieval-based memory system for agents that supports continual skill refinement and specialization. By minimizing centralized control and data exchange, AgentNet enables fault-tolerant, privacy-preserving collaboration across organizations. Experiments show that AgentNet achieves higher task accuracy than both single-agent and centralized multi-agent baselines.
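The sketch below illustrates the core idea described in the abstract: agents connected in a DAG, each holding a local retrieval memory, deciding on their own whether to handle a task or forward it to the most specialized downstream neighbor. It is a minimal illustration under stated assumptions, not the authors' implementation; all class and function names (Agent, similarity, handle) are hypothetical, and a toy word-overlap score stands in for embedding-based retrieval.

```python
# Illustrative sketch (not the AgentNet authors' code) of decentralized task
# routing over a DAG of agents with local retrieval-based memory.
from dataclasses import dataclass, field


def similarity(a: str, b: str) -> float:
    """Toy lexical-overlap score standing in for embedding retrieval."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))


@dataclass
class Agent:
    name: str
    memory: list = field(default_factory=list)       # past task descriptions
    successors: list = field(default_factory=list)   # outgoing DAG edges

    def expertise(self, task: str) -> float:
        # Local estimate of fit, computed only from this agent's own memory.
        return max((similarity(task, m) for m in self.memory), default=0.0)

    def handle(self, task: str) -> str:
        # Decentralized decision: keep the task if local expertise dominates,
        # otherwise route it to the most specialized successor. No central
        # orchestrator sees the whole graph or all memories.
        best = max(self.successors, key=lambda a: a.expertise(task), default=None)
        if best is None or self.expertise(task) >= best.expertise(task):
            self.memory.append(task)   # continual skill refinement
            return f"{self.name} solved: {task}"
        return best.handle(task)


# Usage: a tiny three-agent DAG with seeded specializations.
coder = Agent("coder", memory=["write python sorting function"])
writer = Agent("writer", memory=["summarize research abstract"])
router = Agent("router", successors=[coder, writer])
print(router.handle("summarize the abstract of a paper"))  # routed to writer
```

In this toy setting, each routing decision and memory update stays local to an agent, which is how the abstract's claims about fault tolerance and privacy-preserving collaboration would arise: no component needs global state or access to other agents' memories.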
Related papers
- AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs [8.912989700822127]
We propose AgentsNet, a new benchmark for multi-agent reasoning. We evaluate a variety of baseline methods on AgentsNet, including homogeneous networks of agents. We find that some frontier LLMs already demonstrate strong performance on small networks, but their performance begins to fall off as the network size grows.
arXiv Detail & Related papers (2025-07-11T14:13:22Z) - Agent-as-a-Service based on Agent Network [9.5094423572869]
We propose Agent-as-a-Service based on Agent Network (A-AN), a service-oriented paradigm grounded in the Role-Goal-Process-Service (RGPS) standard. A-AN unifies the entire agent lifecycle, including construction, integration, interoperability, and networked collaboration. We release a dataset containing 10,000 long-horizon multi-agent tasks to facilitate future research on long-chain collaboration in MAS.
arXiv Detail & Related papers (2025-05-13T11:15:19Z) - LLM-Powered Decentralized Generative Agents with Adaptive Hierarchical Knowledge Graph for Cooperative Planning [12.996741471128539]
Developing intelligent agents for long-term cooperation in dynamic open-world scenarios is a major challenge in multi-agent systems. We propose Decentralized Adaptive Knowledge Graph Memory and Structured Communication System (DAMCS) in a novel Multi-agent Crafter environment. Our generative agents, powered by Large Language Models (LLMs), are more scalable than traditional MARL agents by leveraging external knowledge and language for long-term planning and reasoning.
arXiv Detail & Related papers (2025-02-08T05:26:02Z) - Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination [0.9776703963093367]
Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) has emerged as a pivotal approach for addressing complex tasks in dynamic environments.
This paper presents a novel Dec-MARL framework that integrates peer-to-peer communication and coordination, incorporating goal-awareness and time-awareness into the agents' knowledge-sharing processes.
arXiv Detail & Related papers (2025-01-26T22:49:50Z) - MorphAgent: Empowering Agents through Self-Evolving Profiles and Decentralized Collaboration [8.078098082305575]
This paper introduces MorphAgent, a novel framework for decentralized multi-agent collaboration.
MorphAgent employs self-evolving agent profiles, optimized through three key metrics.
Our experimental results show that MorphAgent outperforms traditional static-role MAS in terms of task performance and adaptability to changing requirements.
arXiv Detail & Related papers (2024-10-19T09:10:49Z) - DAWN: Designing Distributed Agents in a Worldwide Network [0.38447712214412116]
DAWN enables distributed agents worldwide to register and be easily discovered through Gateway Agents.
DAWN offers three operational modes: No-LLM Mode for deterministic tasks, Copilot for augmented decision-making, and LLM Agent for autonomous operations.
DAWN ensures the safety and security of agent collaborations globally through a dedicated safety, security, and compliance layer.
arXiv Detail & Related papers (2024-10-11T18:47:04Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
G"odel Agent is a self-evolving framework inspired by the G"odel machine.
G"odel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z) - EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms [55.77492625524141]
EvoAgent is a generic method to automatically extend specialized agents to multi-agent systems. We show that EvoAgent can significantly enhance the task-solving capability of LLM-based agents.
arXiv Detail & Related papers (2024-06-20T11:49:23Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - AgentScope: A Flexible yet Robust Multi-Agent Platform [66.64116117163755]
AgentScope is a developer-centric multi-agent platform with message exchange as its core communication mechanism.
Its rich syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitoring, zero-code programming workstation, and automatic prompt tuning mechanism significantly lower the barriers to both development and deployment.
arXiv Detail & Related papers (2024-02-21T04:11:28Z) - S-Agents: Self-organizing Agents in Open-ended Environments [15.700383873385892]
We introduce a self-organizing agent system (S-Agents) with a "tree of agents" structure for dynamic workflow.
This structure can autonomously coordinate a group of agents, efficiently addressing the challenges of open and dynamic environments.
Our experiments demonstrate that S-Agents proficiently execute collaborative building tasks and resource collection in the Minecraft environment.
arXiv Detail & Related papers (2024-02-07T04:36:31Z) - Decentralized Control with Graph Neural Networks [147.84766857793247]
We propose a novel framework using graph neural networks (GNNs) to learn decentralized controllers.
GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties.
The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
arXiv Detail & Related papers (2020-12-29T18:59:14Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.