Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning
- URL: http://arxiv.org/abs/2403.06535v1
- Date: Mon, 11 Mar 2024 09:21:11 GMT
- Title: Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning
- Authors: Shuo Tang, Rui Ye, Chenxin Xu, Xiaowen Dong, Siheng Chen, Yanfeng Wang
- Abstract summary: Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
- Score: 57.652899266553035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized and lifelong-adaptive multi-agent collaborative learning aims
to enhance collaboration among multiple agents without a central server, with
each agent solving varied tasks over time. To achieve efficient collaboration,
agents should: i) autonomously identify beneficial collaborative relationships
in a decentralized manner; and ii) adapt to dynamically changing task
observations. In this paper, we propose DeLAMA, a decentralized multi-agent
lifelong collaborative learning algorithm with dynamic collaboration graphs. To
promote autonomous collaboration relationship learning, we propose a
decentralized graph structure learning algorithm, eliminating the need for
external priors. To facilitate adaptation to dynamic tasks, we design a memory
unit to capture the agents' accumulated learning history and knowledge, while
preserving finite storage consumption. To further augment the system's
expressive capabilities and computational efficiency, we apply algorithm
unrolling, leveraging the advantages of both mathematical optimization and
neural networks. This allows the agents to "learn to collaborate" through the
supervision of training tasks. Our theoretical analysis verifies that
inter-agent collaboration is communication-efficient, requiring only a small
number of communication rounds. The experimental results verify its ability to facilitate
the discovery of collaboration strategies and adaptation to dynamic learning
scenarios, achieving a 98.80% reduction in MSE and a 188.87% improvement in
classification accuracy. We expect our work can serve as a foundational
technique to facilitate future works towards an intelligent, decentralized, and
dynamic multi-agent system. Code is available at
https://github.com/ShuoTang123/DeLAMA.
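The abstract's core idea, each agent inferring collaboration weights over its peers and blending in their knowledge without a central server, can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only, not DeLAMA's actual update: the weights below come from a simple parameter-similarity heuristic, whereas the paper learns the collaboration graph via decentralized optimization and algorithm unrolling.

```python
import numpy as np

def collaboration_weights(own, neighbors, temperature=1.0):
    """Soft collaboration weights from parameter similarity.

    Agents whose models are closer to our own get higher weight.
    (Illustrative heuristic, not the paper's learned graph.)
    """
    dists = np.array([np.linalg.norm(own - nb) for nb in neighbors])
    scores = np.exp(-dists / temperature)
    return scores / scores.sum()

def collaborative_update(own, neighbors, alpha=0.5, temperature=1.0):
    """Blend own parameters with a similarity-weighted neighbor average."""
    w = collaboration_weights(own, neighbors, temperature)
    neighbor_avg = sum(wi * nb for wi, nb in zip(w, neighbors))
    return (1 - alpha) * own + alpha * neighbor_avg

# Toy example: one similar neighbor, one dissimilar neighbor.
own = np.array([1.0, 0.0])
neighbors = [np.array([1.1, 0.1]), np.array([5.0, 5.0])]
w = collaboration_weights(own, neighbors)          # similar neighbor dominates
updated = collaborative_update(own, neighbors)
```

Each agent can run this update using only messages from its immediate neighbors, which is the decentralized property the paper targets; DeLAMA additionally unrolls the underlying optimization into a trainable network so the weighting rule itself is learned from training tasks.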
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
  We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
  We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
  arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
  Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
  Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing the number of agents in multi-agent collaboration.
  arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
  We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
  Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
  arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks [11.182443036683225]
  Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing.
  This paper specifically focuses on a scenario where agents collaborate towards a common task.
  Agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward.
  arXiv Detail & Related papers (2023-11-08T11:12:27Z)
- Unrolled Graph Learning for Multi-Agent Collaboration [37.239120967721156]
  We propose a distributed multi-agent learning model inspired by human collaboration.
  Agents can autonomously detect suitable collaborators and refer to collaborators' models for better performance.
  arXiv Detail & Related papers (2022-10-31T07:05:44Z)
- Behaviour-conditioned policies for cooperative reinforcement learning tasks [41.74498230885008]
  In various real-world tasks, an agent needs to cooperate with unknown partner agent types.
  Deep reinforcement learning models can be trained to deliver the required functionality but are known to suffer from sample inefficiency and slow learning.
  We suggest a method where we synthetically produce populations of agents with different behavioural patterns, together with ground-truth data of their behaviour.
  We additionally suggest an agent architecture that can efficiently use the generated data and gain the meta-learning capability.
  arXiv Detail & Related papers (2021-10-04T09:16:41Z)
- Improving Multi-agent Coordination by Learning to Estimate Contention [24.52552750240412]
  We present a multi-agent learning algorithm, ALMA-Learning, for efficient and fair allocations in large-scale systems.
  ALMA-Learning is decentralized, observes only its own action/reward pairs, requires no inter-agent communication, and achieves near-optimal (5% loss) and fair coordination.
  arXiv Detail & Related papers (2021-05-09T21:30:48Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
  We study a new model of multiple federated learning services at the multi-access edge computing server.
  We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
  Our simulation results demonstrate the convergence performance of our proposed algorithms.
  arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
  We propose a cooperative multi-agent meta-learning algorithm, referred to as Dif-MAML.
  We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
  Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
  arXiv Detail & Related papers (2020-10-06T16:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.