G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks
- URL: http://arxiv.org/abs/2410.11782v1
- Date: Tue, 15 Oct 2024 17:01:21 GMT
- Title: G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks
- Authors: Guibin Zhang, Yanwei Yue, Xiangguo Sun, Guancheng Wan, Miao Yu, Junfeng Fang, Kun Wang, Dawei Cheng,
- Abstract summary: We introduce G-Designer, an adaptive, efficient, and robust solution for multi-agent deployment.
G-Designer dynamically designs task-aware, customized communication topologies.
- Score: 14.024988515071431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in large language model (LLM)-based agents have demonstrated that collective intelligence can significantly surpass the capabilities of individual agents, primarily due to well-crafted inter-agent communication topologies. Despite the diverse and high-performing designs available, practitioners often face confusion when selecting the most effective pipeline for their specific task: \textit{Which topology is the best choice for my task, avoiding unnecessary communication token overhead while ensuring high-quality solution?} In response to this dilemma, we introduce G-Designer, an adaptive, efficient, and robust solution for multi-agent deployment, which dynamically designs task-aware, customized communication topologies. Specifically, G-Designer models the multi-agent system as a multi-agent network, leveraging a variational graph auto-encoder to encode both the nodes (agents) and a task-specific virtual node, and decodes a task-adaptive and high-performing communication topology. Extensive experiments on six benchmarks showcase that G-Designer is: \textbf{(1) high-performing}, achieving superior results on MMLU with accuracy at $84.50\%$ and on HumanEval with pass@1 at $89.90\%$; \textbf{(2) task-adaptive}, architecting communication protocols tailored to task difficulty, reducing token consumption by up to $95.33\%$ on HumanEval; and \textbf{(3) adversarially robust}, defending against agent adversarial attacks with merely $0.3\%$ accuracy drop.
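The abstract describes encoding the agents plus a task-specific virtual node with a variational graph auto-encoder and decoding a communication topology. Below is a minimal, illustrative sketch of that idea in PyTorch; the layer sizes, the MLP encoder (a true VGAE would use graph convolutions), the inner-product decoder, and all names (e.g., `TopologyDesigner`) are assumptions for illustration, not the authors' implementation.
```python
import torch
import torch.nn as nn

class TopologyDesigner(nn.Module):
    """Illustrative VGAE-style sketch: encode agent profiles plus a virtual
    task node, then decode a soft inter-agent communication topology."""

    def __init__(self, feat_dim: int, hidden_dim: int = 64, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, agent_feats: torch.Tensor, task_feat: torch.Tensor):
        # Append the task-specific virtual node to the agent nodes.
        x = torch.cat([agent_feats, task_feat.unsqueeze(0)], dim=0)
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # Inner-product decoder over agent latents only (drop the virtual node),
        # yielding edge probabilities of the communication topology.
        z_agents = z[:-1]
        adj = torch.sigmoid(z_agents @ z_agents.t())
        return adj, mu, logvar

# Example: 4 agents with 32-dim profile embeddings and one 32-dim task embedding.
designer = TopologyDesigner(feat_dim=32)
adj, mu, logvar = designer(torch.randn(4, 32), torch.randn(32))
print(adj.shape)  # torch.Size([4, 4]) -- soft communication topology
```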
Related papers
- Two Heads are Better Than One: Test-time Scaling of Multi-agent Collaborative Reasoning [29.580108004844856]
Multi-agent systems (MAS) built on large language models (LLMs) offer a promising path toward solving complex, real-world tasks.
Recent advancements in test-time scaling (TTS) have significantly improved single-agent performance on challenging reasoning tasks.
We introduce an adaptive multi-agent framework designed to enhance collaborative reasoning through both model-level training and system-level coordination.
arXiv Detail & Related papers (2025-04-14T00:27:45Z) - Graph Based Deep Reinforcement Learning Aided by Transformers for Multi-Agent Cooperation [2.8169258551959544]
We propose a novel framework that integrates Graph Neural Networks (GNNs), Deep Reinforcement Learning (DRL), and transformer-based mechanisms for enhanced multi-agent coordination and collective task execution.
Our approach leverages GNNs to model agent-agent and agent-goal interactions through adaptive graph construction, enabling efficient information aggregation and decision-making under constrained communication.
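A minimal sketch of the adaptive-graph idea described above, assuming a distance-thresholded graph construction and a single mean-aggregation message-passing layer; the names and details are illustrative, not taken from the paper.
```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    """One message-passing step: aggregate neighbor features, then update."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbor_mean = adj @ x / deg
        return torch.relu(self.update(torch.cat([x, neighbor_mean], dim=1)))

def build_adaptive_graph(positions: torch.Tensor, radius: float) -> torch.Tensor:
    """Connect agents within communication range (illustrative construction)."""
    dists = torch.cdist(positions, positions)
    adj = (dists <= radius).float()
    adj.fill_diagonal_(0.0)
    return adj

# Example: 5 agents with 2-D positions and 8-dim observation embeddings.
positions, feats = torch.rand(5, 2), torch.randn(5, 8)
adj = build_adaptive_graph(positions, radius=0.5)
layer = MeanAggregationLayer(dim=8)
coordinated_feats = layer(feats, adj)  # features informed by in-range neighbors
```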
arXiv Detail & Related papers (2025-04-11T01:46:18Z) - Multi-agent Architecture Search via Agentic Supernet [17.235963703597093]
Large Language Model (LLM)-empowered multi-agent systems extend the cognitive boundaries of individual agents.
Despite the availability of methods to automate the design of agentic systems, these methods typically seek to identify a static, complex, one-size-fits-all system.
We introduce MaAS, an automated framework that samples query-dependent agentic systems from the supernet.
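A toy sketch of query-dependent sampling from an agentic supernet, assuming a learned categorical distribution over candidate operators per layer; the operator names and the controller are purely illustrative.
```python
import random

# Hypothetical candidate operators at each layer of the supernet.
SUPERNET = [
    ["single_agent", "debate_pair", "reflect_then_answer"],
    ["majority_vote", "critic_review", "direct_output"],
]

def sample_agentic_system(query: str, probs_per_layer):
    """Sample one operator per layer; in practice the probabilities would come
    from a learned, query-conditioned controller (here they are given directly)."""
    system = []
    for ops, probs in zip(SUPERNET, probs_per_layer):
        system.append(random.choices(ops, weights=probs, k=1)[0])
    return system

# Easy queries could concentrate mass on cheap operators, hard ones on richer pipelines.
easy = sample_agentic_system("2 + 2 = ?", [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
hard = sample_agentic_system("Prove the statement ...", [[0.1, 0.5, 0.4], [0.3, 0.6, 0.1]])
print(easy, hard)
```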
arXiv Detail & Related papers (2025-02-06T16:12:06Z) - FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve 1.45 - 9.39x speedup compared to baseline methods while ensuring convergence.
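The summary does not detail FusionLLM's "adaptive compression" mechanism; the sketch below shows generic top-k gradient sparsification, included only to illustrate what adaptive compression can look like in decentralized training, not the system's actual method.
```python
import torch

def topk_compress(grad: torch.Tensor, ratio: float):
    """Keep only the largest-magnitude entries of a gradient tensor.
    `ratio` could be adapted at runtime, e.g. to the measured network bandwidth."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]

def decompress(indices: torch.Tensor, values: torch.Tensor, shape) -> torch.Tensor:
    out = torch.zeros(int(torch.tensor(shape).prod()))
    out[indices] = values
    return out.reshape(shape)

grad = torch.randn(256, 128)
idx, vals = topk_compress(grad, ratio=0.01)   # ~1% of entries transmitted
restored = decompress(idx, vals, grad.shape)  # sparse approximation at the receiver
```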
arXiv Detail & Related papers (2024-10-16T16:13:19Z) - Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems [42.137278756052595]
$\texttt{AgentPrune}$ can seamlessly integrate into mainstream multi-agent systems.
\textbf{(I)} integrates seamlessly into existing multi-agent frameworks with a $28.1\%\sim72.8\%\downarrow$ token reduction.
\textbf{(III)} successfully defends against two types of agent-based adversarial attacks with a $3.5\%\sim10.8\%\uparrow$ performance boost.
arXiv Detail & Related papers (2024-10-03T14:14:31Z) - On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents [58.79302663733703]
Large language model-based multi-agent systems have shown great abilities across various tasks due to the collaboration of expert agents.
However, the impact of clumsy or even malicious agents on the overall performance of the system remains underexplored.
This paper investigates the resilience of various system structures under faulty agents.
arXiv Detail & Related papers (2024-08-02T03:25:20Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
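A hedged sketch of the "learnable communication graph" idea: a matrix of edge logits relaxed with a sigmoid so the topology can be optimized by gradient descent alongside the policy parameters. The exact parameterization in CommFormer may differ.
```python
import torch
import torch.nn as nn

class LearnableCommGraph(nn.Module):
    """Edge logits define a soft adjacency; training can push it toward a
    sparse, task-suited communication structure."""
    def __init__(self, n_agents: int):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(n_agents, n_agents))

    def forward(self) -> torch.Tensor:
        adj = torch.sigmoid(self.edge_logits)
        return adj * (1 - torch.eye(adj.size(0)))  # no self-communication edges

graph = LearnableCommGraph(n_agents=4)
adj = graph()
# In training, a task loss plus a sparsity penalty would shape the graph; here we
# backpropagate through a simple sparsity term to show the end-to-end gradient path.
sparsity_penalty = 1e-3 * adj.sum()
sparsity_penalty.backward()  # edge_logits receive gradients
```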
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - Generative AI Agents with Large Language Model for Satellite Networks via a Mixture of Experts Transmission [74.10928850232717]
This paper develops generative artificial intelligence (AI) agents for model formulation and then applies a mixture of experts (MoE) to design transmission strategies.
Specifically, we leverage large language models (LLMs) to build an interactive modeling paradigm.
We propose an MoE-proximal policy optimization (PPO) approach to solve the formulated problem.
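A minimal mixture-of-experts gating sketch, showing the MoE component mentioned above in isolation; the expert and gate architectures, and how they would feed the PPO policy, are assumptions.
```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """A softmax gate mixes the outputs of several expert networks."""
    def __init__(self, in_dim: int, out_dim: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_experts)])
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, n_experts, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

moe = TinyMoE(in_dim=16, out_dim=8)
y = moe(torch.randn(32, 16))  # (32, 8): per-sample mixture of expert outputs
```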
arXiv Detail & Related papers (2024-04-14T03:44:54Z) - A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration [55.35849138235116]
We propose automatically selecting a team of agents from candidates to collaborate in a dynamic communication structure toward different tasks and domains.
Specifically, we build a framework named Dynamic LLM-Powered Agent Network (DyLAN) for LLM-powered agent collaboration.
We demonstrate that DyLAN outperforms strong baselines in code generation, decision-making, general reasoning, and arithmetic reasoning tasks with moderate computational cost.
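A toy sketch of selecting a query-dependent agent team from a candidate pool, assuming each candidate exposes a set of skills to match against the task; DyLAN's actual selection (e.g., its agent-importance scoring) is more involved.
```python
from dataclasses import dataclass

@dataclass
class CandidateAgent:
    name: str
    skills: set

def select_team(task_keywords: set, candidates, team_size: int = 3):
    """Rank candidates by skill overlap with the task and keep the top few."""
    ranked = sorted(candidates,
                    key=lambda a: len(a.skills & task_keywords),
                    reverse=True)
    return ranked[:team_size]

pool = [
    CandidateAgent("coder", {"python", "debugging"}),
    CandidateAgent("mathematician", {"algebra", "proofs"}),
    CandidateAgent("tester", {"python", "unit-tests"}),
    CandidateAgent("writer", {"summarization"}),
]
team = select_team({"python", "unit-tests"}, pool)
print([a.name for a in team])  # e.g. ['tester', 'coder', 'mathematician']
```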
arXiv Detail & Related papers (2023-10-03T16:05:48Z) - Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models [80.23791222509644]
Inconsistent AI models are considered brittle and untrustworthy by human users.
We find that state-of-the-art vision-language models suffer from a surprisingly high degree of inconsistent behavior across tasks.
We propose a rank correlation-based auxiliary training objective, computed over large automatically created cross-task contrast sets.
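A rough sketch of a rank-correlation consistency check across two task heads on a contrast set, using Spearman correlation from SciPy as the measure; the paper's objective is a differentiable training loss, whereas this snippet only computes the correlation for illustration.
```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores that two task heads assign to the same contrast set of examples.
vqa_scores     = np.array([0.9, 0.2, 0.7, 0.4, 0.6])
caption_scores = np.array([0.8, 0.3, 0.9, 0.1, 0.5])

rho, _ = spearmanr(vqa_scores, caption_scores)
consistency_penalty = 1.0 - rho  # small when the two heads rank examples alike
print(f"Spearman rho={rho:.2f}, penalty={consistency_penalty:.2f}")
```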
arXiv Detail & Related papers (2023-03-28T16:57:12Z) - Intelligent Trajectory Design for RIS-NOMA aided Multi-robot Communications [59.34642007625687]
The goal is to maximize the sum-rate over whole trajectories for the multi-robot system by jointly optimizing the trajectories and the NOMA decoding orders of the robots.
An integrated machine learning (ML) scheme is proposed, which combines a long short-term memory (LSTM)-autoregressive integrated moving average (ARIMA) model with a dueling double deep Q-network (D$^3$QN) algorithm.
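A compact sketch of the dueling Q-network head used in D$^3$QN together with the double-DQN target; the state and action dimensions, and how it would combine with the LSTM-ARIMA predictor, are illustrative assumptions.
```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.body(s)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)

online, target = DuelingQNet(8, 4), DuelingQNet(8, 4)
s_next, reward, gamma = torch.randn(32, 8), torch.zeros(32), 0.99
# Double DQN target: the online net selects the action, the target net evaluates it.
best_a = online(s_next).argmax(dim=1, keepdim=True)
td_target = reward + gamma * target(s_next).gather(1, best_a).squeeze(1)
```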
arXiv Detail & Related papers (2022-05-03T17:14:47Z) - Multi-Agent Embodied Visual Semantic Navigation with Scene Prior Knowledge [42.37872230561632]
In visual semantic navigation, the robot navigates to a target object with egocentric visual observations and the class label of the target is given.
Most of the existing models are only effective for single-agent navigation, and a single agent has low efficiency and poor fault tolerance when completing more complicated tasks.
We propose multi-agent visual semantic navigation, in which multiple agents collaborate to find multiple target objects.
arXiv Detail & Related papers (2021-09-20T13:31:03Z) - Accelerating Distributed Online Meta-Learning via Multi-Agent Collaboration under Limited Communication [24.647993999787992]
We propose a multi-agent online meta-learning framework and cast it as an equivalent two-level nested online convex optimization (OCO) problem.
By characterizing the upper bound of the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication.
We show a speedup by a factor of $\sqrt{1/N}$ over the optimal single-agent regret $O(\sqrt{T})$.
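Restating the claimed scaling under its usual reading: applying a $\sqrt{1/N}$ factor from $N$ collaborating agents to the single-agent regret $O(\sqrt{T})$ gives, roughly,
```latex
\mathrm{Regret}_{\text{multi}}(T) \;=\; O\!\Big(\sqrt{\tfrac{1}{N}} \cdot \sqrt{T}\Big) \;=\; O\!\big(\sqrt{T/N}\big).
```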
arXiv Detail & Related papers (2020-12-15T23:08:36Z) - Message-Aware Graph Attention Networks for Large-Scale Multi-Robot Path Planning [12.988435681305281]
Graph Neural Networks (GNNs) have become popular due to their ability to learn communication policies in decentralized multi-agent systems.
We extend our previous work that utilizes GNNs in multi-agent path planning by incorporating a novel mechanism to allow for message-dependent attention.
Our Message-Aware Graph Attention neTwork (MAGAT) is based on a key-query-like mechanism that determines the relative importance of features in the messages received from various neighboring robots.
arXiv Detail & Related papers (2020-11-26T10:37:13Z)
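A minimal key-query attention sketch over messages from neighboring robots, in the spirit of the mechanism described in the MAGAT entry above; the dimensions, scaling, and absence of multi-head structure are simplifications.
```python
import torch
import torch.nn as nn

class MessageAttention(nn.Module):
    """Weight neighbor messages by key-query similarity before aggregation."""
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, own_state: torch.Tensor, neighbor_msgs: torch.Tensor) -> torch.Tensor:
        # own_state: (dim,), neighbor_msgs: (n_neighbors, dim)
        q = self.query(own_state)                                 # (dim,)
        k = self.key(neighbor_msgs)                               # (n, dim)
        scores = torch.softmax(k @ q / q.numel() ** 0.5, dim=0)   # relative importance of messages
        return scores @ self.value(neighbor_msgs)                 # attention-weighted aggregate

attn = MessageAttention(dim=16)
aggregated = attn(torch.randn(16), torch.randn(3, 16))  # messages from 3 neighboring robots
```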
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.