Efficient decentralized multi-agent learning in asymmetric bipartite
queueing systems
- URL: http://arxiv.org/abs/2206.03324v3
- Date: Sat, 5 Aug 2023 16:32:09 GMT
- Title: Efficient decentralized multi-agent learning in asymmetric bipartite
queueing systems
- Authors: Daniel Freund and Thodoris Lykouris and Wentao Weng
- Abstract summary: We study decentralized multi-agent learning in bipartite queueing systems.
In particular, N agents request service from K servers in a fully decentralized way.
We provide a simple learning algorithm that, when run decentrally by each agent, leads the queueing system to have efficient performance.
- Score: 6.069611493148631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study decentralized multi-agent learning in bipartite queueing systems, a
standard model for service systems. In particular, N agents request service
from K servers in a fully decentralized way, i.e., by running the same algorithm
without communication. Previous decentralized algorithms are restricted to
symmetric systems, have performance that degrades exponentially in the
number of servers, require communication through shared randomness and unique
agent identities, and are computationally demanding. In contrast, we provide a
simple learning algorithm that, when run decentrally by each agent, leads the
queueing system to have efficient performance in general asymmetric bipartite
queueing systems while also having additional robustness properties. Along the
way, we provide the first provably efficient UCB-based algorithm for the
centralized case of the problem.
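The centralized variant can be viewed as a combinatorial bandit: a single
controller maintains optimistic (UCB) estimates of the unknown agent-server
service rates and repeatedly assigns agents to servers based on those
estimates. The sketch below is only illustrative and is not the authors'
algorithm: it assumes Bernoulli service outcomes, abstracts away queue
dynamics, and uses hypothetical names (ucb_matching, true_rates).

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ucb_matching(successes, attempts, t, c=2.0):
        """One round of a UCB-style centralized assignment (illustrative sketch).

        successes[i, k] / attempts[i, k] estimates the probability that
        server k successfully serves agent i; unexplored pairs get the
        maximal optimistic value 1.0.
        """
        means = np.divide(successes, attempts,
                          out=np.zeros_like(successes), where=attempts > 0)
        bonus = np.sqrt(c * np.log(t + 1) / np.maximum(attempts, 1))
        ucb = np.where(attempts > 0, np.minimum(means + bonus, 1.0), 1.0)
        # Max-weight matching on optimistic rates (scipy minimizes, so negate).
        return linear_sum_assignment(-ucb)

    # Toy simulation with hypothetical Bernoulli service rates.
    rng = np.random.default_rng(0)
    n_agents, n_servers, horizon = 5, 3, 2000
    true_rates = rng.uniform(0.2, 0.9, size=(n_agents, n_servers))
    successes = np.zeros((n_agents, n_servers))
    attempts = np.zeros((n_agents, n_servers))
    for t in range(1, horizon + 1):
        agents, servers = ucb_matching(successes, attempts, t)
        served = rng.random(len(agents)) < true_rates[agents, servers]
        attempts[agents, servers] += 1
        successes[agents, servers] += served

In the fully decentralized setting of the paper, each agent would instead
maintain only its own row of such estimates and resolve collisions locally;
the sketch only conveys the optimism-plus-matching idea.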
Related papers
- Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis in massive Internet of Things (IoT)-based intelligent and ubiquitous computing.
As applications and data volumes grow rapidly, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share or aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experimental results show that the proposed algorithms outperform the state of the art in terms of communication cost and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Distributed Algorithms for Linearly-Solvable Optimal Control in
Networked Multi-Agent Systems [15.782670973813774]
A distributed framework is proposed to partition the optimal control problem of a networked MAS into several local optimal control problems.
For discrete-time systems, the joint Bellman equation of each subsystem is transformed into a system of linear equations.
For continuous-time systems, the joint optimality equation of each subsystem is converted into a linear partial differential equation.
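For context, this reduction follows the spirit of the standard linearly-solvable optimal-control transformation. In generic single-agent notation (not taken from the paper), with state cost q(x), passive dynamics p(x' | x), and desirability z(x) = e^{-V(x)}, the Bellman equation becomes linear in z:

    z(x) = e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'),  i.e.  z = \operatorname{diag}(e^{-q})\, P\, z,

so the optimal value function is recovered by solving a system of linear equations; the continuous-time analogue turns the HJB equation into a linear PDE in z.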
arXiv Detail & Related papers (2021-02-18T01:31:17Z)
- Decentralized Deep Learning using Momentum-Accelerated Consensus [15.333413663982874]
We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset.
We propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology.
The algorithm is based on the heavy-ball momentum acceleration used in gradient-based consensus protocols.
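As a generic illustration (not necessarily the exact scheme of this paper), a heavy-ball accelerated decentralized gradient step for agent i over a doubly stochastic mixing matrix W takes the form

    x_i^{t+1} = \sum_j W_{ij} x_j^{t} - \alpha \nabla f_i(x_i^{t}) + \beta (x_i^{t} - x_i^{t-1}),

where the first term averages neighbors' iterates over the fixed communication topology, the second is a local gradient step, and the third is the momentum term.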
arXiv Detail & Related papers (2020-10-21T17:39:52Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- A Low Complexity Decentralized Neural Net with Centralized Equivalence
using Layer-wise Learning [49.15799302636519]
We design a low-complexity decentralized learning algorithm to train a recently proposed large neural network across distributed processing nodes (workers).
In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns.
We show that it is possible to achieve equivalent learning performance as if the data is available in a single place.
arXiv Detail & Related papers (2020-09-29T13:08:12Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)