The Power of Communication in a Distributed Multi-Agent System
- URL: http://arxiv.org/abs/2111.15611v2
- Date: Wed, 1 Dec 2021 11:41:13 GMT
- Title: The Power of Communication in a Distributed Multi-Agent System
- Authors: Philipp Dominic Siedler
- Abstract summary: Single-Agent (SA) Reinforcement Learning systems have shown outstanding results on non-stationary problems.
Multi-Agent Reinforcement Learning (MARL) can surpass SA systems generally and when scaling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-Agent (SA) Reinforcement Learning systems have shown outstanding
results on non-stationary problems. However, Multi-Agent Reinforcement
Learning (MARL) can surpass SA systems generally and when scaling. Furthermore,
MA systems can be super-powered by collaboration, which can happen through
observing others, or a communication system used to share information
between collaborators. Here, we developed a distributed MA learning mechanism
with the ability to communicate based on decentralised partially observable
Markov decision processes (Dec-POMDPs) and Graph Neural Networks (GNNs).
Minimising the time and energy consumed by training Machine Learning models
while improving performance can be achieved by collaborative MA mechanisms.
We demonstrate this in a real-world scenario, an offshore wind farm, including a
set of distributed wind turbines, where the objective is to maximise collective
efficiency. Compared to a SA system, MA collaboration has shown significantly
reduced training time and higher cumulative rewards in unseen and scaled
scenarios.
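The GNN-based communication mechanism the abstract describes can be sketched as one round of neighbourhood message passing: each agent mixes its own observation embedding with an aggregate of its neighbours' embeddings. The mean aggregation, tanh nonlinearity, and weight shapes below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def message_passing_round(features, adjacency, w_self, w_msg):
    """One illustrative round of mean-aggregation message passing:
    each agent combines its own embedding with the mean of its
    neighbours' embeddings, as a GNN communication layer might."""
    # Mean over neighbours; guard against isolated agents (degree 0)
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbour_mean = adjacency @ features / np.maximum(deg, 1.0)
    return np.tanh(features @ w_self + neighbour_mean @ w_msg)

rng = np.random.default_rng(0)
n_agents, dim = 4, 8
feats = rng.normal(size=(n_agents, dim))
# Hypothetical communication graph: which agents (e.g. turbines) can exchange messages
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
out = message_passing_round(feats, adj,
                            rng.normal(size=(dim, dim)),
                            rng.normal(size=(dim, dim)))
print(out.shape)  # (4, 8): each agent's updated, communication-aware embedding
```

Stacking several such rounds lets information propagate beyond immediate neighbours, which is what makes the mechanism useful under partial observability.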
Related papers
- Cooperative Multi-Agent Planning with Adaptive Skill Synthesis [16.228784877899976]
Multi-agent systems with reinforcement learning face challenges in sample efficiency, interpretability, and transferability.
We present a novel multi-agent architecture that integrates vision-language models (VLMs) with a dynamic skill library and structured communication for decentralized closed-loop decision-making.
arXiv Detail & Related papers (2025-02-14T13:23:18Z) - Distributed Multi-Head Learning Systems for Power Consumption Prediction [59.293903039988884]
We propose Distributed Multi-Head learning (DMH) systems for power consumption prediction in smart factories.
DMH systems are designed as distributed and split learning, reducing the client-to-server transmission cost.
DMH-E system reduces the error of the state-of-the-art systems by 14.5% to 24.0%.
arXiv Detail & Related papers (2025-01-21T13:46:23Z) - HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent
Pathfinding [16.36594480478895]
Heuristics-Informed Multi-Agent Pathfinding (HiMAP)
arXiv Detail & Related papers (2024-02-23T13:01:13Z) - Neuro-mimetic Task-free Unsupervised Online Learning with Continual
Self-Organizing Maps [56.827895559823126]
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show almost a two times increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z) - Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL)
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z) - Scalable Multi-Agent Model-Based Reinforcement Learning [1.95804735329484]
We propose a new method called MAMBA which utilizes Model-Based Reinforcement Learning (MBRL) to further leverage centralized training in cooperative environments.
We argue that communication between agents is enough to sustain a world model for each agent during execution phase while imaginary rollouts can be used for training, removing the necessity to interact with the environment.
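The imagined-rollout idea in the MAMBA entry above (training on trajectories generated inside a learned world model, rather than the real environment) can be sketched minimally. The policy and world model here are toy stand-ins, not MAMBA's actual components.

```python
import numpy as np

def imagined_rollout(state, policy, world_model, horizon):
    """Roll a policy forward inside a learned world model: no real
    environment step is taken, each next state is predicted."""
    trajectory = [state]
    for _ in range(horizon):
        action = policy(state)
        state = world_model(state, action)  # predicted, not observed
        trajectory.append(state)
    return trajectory

# Toy stand-ins: linear dynamics and a proportional control policy
policy = lambda s: -0.5 * s
world_model = lambda s, a: 0.9 * s + a
traj = imagined_rollout(np.array([1.0]), policy, world_model, horizon=5)
print(len(traj))  # 6: initial state plus 5 imagined steps
```

In a MAMBA-style setup the world model would itself be learned, and inter-agent communication would keep each agent's model consistent during execution.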
arXiv Detail & Related papers (2022-05-25T08:35:00Z) - Collaborative Auto-Curricula Multi-Agent Reinforcement Learning with
Graph Neural Network Communication Layer for Open-ended Wildfire-Management
Resource Distribution [0.0]
We build on a recently proposed Multi-Agent Reinforcement Learning (MARL) mechanism with a Graph Neural Network (GNN) communication layer.
We conduct our study in the context of resource distribution for wildfire management.
Our MA communication proposal outperforms a Greedy Heuristic Baseline and a Single-Agent (SA) setup.
arXiv Detail & Related papers (2022-04-24T20:13:30Z) - CTDS: Centralized Teacher with Decentralized Student for Multi-Agent
Reinforcement Learning [114.69155066932046]
This work proposes a novel Centralized Teacher with Decentralized Student (CTDS) framework, which consists of a teacher model and a student model.
Specifically, the teacher model allocates the team reward by learning individual Q-values conditioned on global observation.
The student model utilizes the partial observations to approximate the Q-values estimated by the teacher model.
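The teacher-student relationship in the CTDS entry above can be sketched as a distillation step: a student conditioned only on partial observations regresses toward Q-values produced by a teacher that sees the global observation. The linear Q-functions and all dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 actions, a 6-dim global observation,
# of which the student agent only sees the first 3 dims.
n_actions, global_dim, local_dim = 4, 6, 3

w_teacher = rng.normal(size=(global_dim, n_actions))  # stands in for a trained teacher
w_student = np.zeros((local_dim, n_actions))

global_obs = rng.normal(size=global_dim)
local_obs = global_obs[:local_dim]           # the student's partial view

teacher_q = global_obs @ w_teacher           # distillation targets
lr = 0.5 / (local_obs @ local_obs)           # step size scaled for stable convergence
for _ in range(50):
    error = local_obs @ w_student - teacher_q
    w_student -= lr * np.outer(local_obs, error)  # gradient step on the MSE loss

# For this single fixed observation, the student can match the teacher exactly
print(np.allclose(local_obs @ w_student, teacher_q))  # True
```

At execution time only the student is deployed, so each agent acts on its own partial observation while benefiting from the globally informed training signal.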
arXiv Detail & Related papers (2022-03-16T06:03:14Z) - Opportunities of Federated Learning in Connected, Cooperative and
Automated Industrial Systems [44.627847349764664]
Next-generation industrial systems have driven advances in ultra-reliable, low-latency communications.
Federated learning (FL) represents a mushrooming multidisciplinary research area weaving together sensing, communication and learning.
This article explores emerging opportunities of FL for the next-generation networked industrial systems.
arXiv Detail & Related papers (2021-01-09T14:27:52Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - Self-organizing Democratized Learning: Towards Large-scale Distributed
Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms demonstrate better results in the generalization performance of learning models in agents compared to the conventional FL algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.