Federated Multi-Agent Actor-Critic Learning for Age Sensitive Mobile
Edge Computing
- URL: http://arxiv.org/abs/2012.14137v2
- Date: Wed, 6 Jan 2021 13:43:32 GMT
- Title: Federated Multi-Agent Actor-Critic Learning for Age Sensitive Mobile
Edge Computing
- Authors: Zheqi Zhu, Shuo Wan, Pingyi Fan, Khaled B. Letaief
- Abstract summary: Mobile edge computing (MEC) introduces a new processing scheme for various distributed communication-computing systems.
We formulate a class of age-sensitive MEC models and define the corresponding average age of information (AoI) minimization problems.
A novel policy-based multi-agent deep reinforcement learning (RL) framework, called heterogeneous multi-agent actor-critic (H-MAAC), is proposed as a paradigm for joint collaboration.
- Score: 16.49587367235662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an emerging technique, mobile edge computing (MEC) introduces a new
processing scheme for various distributed communication-computing systems such
as industrial Internet of Things (IoT), vehicular communication, smart cities,
etc. In this work, we focus on the timeliness of MEC systems, where the
freshness of the data and computation tasks is significant. First, we
formulate a class of age-sensitive MEC models and define the corresponding
average age of information (AoI) minimization problems. Then, a novel
policy-based multi-agent deep reinforcement learning (RL) framework, called
heterogeneous multi-agent actor-critic (H-MAAC), is proposed as a paradigm for
joint collaboration in the investigated MEC systems, where the edge devices
and the center controller learn interactive strategies through their own
observations. To improve the system performance, we develop the corresponding
online algorithm by introducing an edge federated learning mode into the
multi-agent cooperation, whose advantages in learning convergence can be
guaranteed theoretically. To the best of our knowledge, this is the first
joint MEC collaboration algorithm that combines the edge federated mode with
multi-agent actor-critic reinforcement learning. Furthermore, we evaluate the
proposed approach and compare it with classical RL-based methods. The proposed
framework not only outperforms the baselines on average system age, but also
improves the stability of the training process. In addition, the simulation
results provide some new perspectives on system design under edge federated
collaboration.
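
For reference, the average AoI objective mentioned above usually follows the standard time-average definition; a generic form (an assumption, not necessarily the exact model used in the paper) is

\bar{\Delta} = \lim_{T \to \infty} \frac{1}{KT} \sum_{k=1}^{K} \int_{0}^{T} \Delta_k(t) \, dt,

where \Delta_k(t) denotes the instantaneous age of the freshest information associated with edge device k out of K devices.

The sketch below illustrates how an edge federated learning mode can be combined with per-agent actor-critic updates: each edge agent takes a local actor-critic step on its own observation, and the edge server then averages the network parameters in the style of FedAvg. This is a minimal illustrative sketch, not the authors' H-MAAC implementation; the names (ActorCritic, federated_average), the observation/action dimensions, and the stand-in reward are assumptions.

import copy

import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    # Small shared-trunk actor-critic network for one edge device (illustrative).
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.actor = nn.Linear(hidden, n_actions)  # offloading-decision logits
        self.critic = nn.Linear(hidden, 1)         # state-value estimate

    def forward(self, obs):
        h = self.trunk(obs)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)


def federated_average(models):
    # FedAvg-style aggregation: average parameters across agents, then broadcast back.
    avg_state = copy.deepcopy(models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
    for m in models:
        m.load_state_dict(avg_state)


# One simplified training round for four edge agents.
agents = [ActorCritic(obs_dim=8, n_actions=3) for _ in range(4)]
optimizers = [torch.optim.Adam(agent.parameters(), lr=1e-3) for agent in agents]

for agent, optimizer in zip(agents, optimizers):
    obs = torch.randn(8)                 # local observation (e.g. task-queue ages)
    dist, value = agent(obs)
    action = dist.sample()
    reward = -torch.rand(1)              # stand-in cost: negative instantaneous age
    advantage = (reward - value).detach()
    loss = -dist.log_prob(action) * advantage + (reward - value).pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

federated_average(agents)                # periodic federated parameter sync

In practice the aggregation would run every few local episodes rather than after every step, and the reward would be derived from the measured AoI of the served tasks.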
Related papers
- Asynchronous Fractional Multi-Agent Deep Reinforcement Learning for Age-Minimal Mobile Edge Computing [14.260646140460187]
We study the timeliness of computation-intensive updates and jointly optimize the task updating and offloading policies to minimize the AoI.
Specifically, we consider edge load dynamics and formulate a task scheduling problem to minimize the expected time-average AoI.
Our proposed algorithms reduce the average AoI by up to 52.6% compared with the best baseline algorithm in our experiments.
arXiv Detail & Related papers (2024-09-25T11:33:32Z) - POGEMA: A Benchmark Platform for Cooperative Multi-Agent Navigation [76.67608003501479]
We introduce and specify an evaluation protocol defining a range of domain-related metrics computed on the basis of the primary evaluation indicators.
The results of such a comparison, which involves a variety of state-of-the-art MARL, search-based, and hybrid methods, are presented.
arXiv Detail & Related papers (2024-07-20T16:37:21Z) - Ensembling Prioritized Hybrid Policies for Multi-agent Pathfinding [18.06081009550052]
Multi-Agent Reinforcement Learning (MARL) based Multi-Agent Path Finding (MAPF) has recently gained attention due to its efficiency and scalability.
Several MARL-MAPF methods choose to use communication to enrich the information one agent can perceive.
We propose a new method, Ensembling Prioritized Hybrid Policies (EPH).
arXiv Detail & Related papers (2024-03-12T11:47:12Z) - Interactive Continual Learning: Fast and Slow Thinking [19.253164551254734]
This paper presents a novel Interactive Continual Learning framework, enabled by collaborative interactions among models of various sizes.
To improve memory retrieval in System1, we introduce the CL-vMF mechanism, based on the von Mises-Fisher (vMF) distribution.
Comprehensive evaluation of our proposed ICL demonstrates significant resistance to forgetting and superior performance relative to existing methods.
arXiv Detail & Related papers (2024-03-05T03:37:28Z) - Inverse Factorized Q-Learning for Cooperative Multi-agent Imitation
Learning [13.060023718506917]
Imitation learning (IL) is the problem of learning to mimic expert behaviors from demonstrations in cooperative multi-agent systems.
We introduce a novel multi-agent IL algorithm designed to address these challenges.
Our approach enables centralized learning by leveraging mixing networks to aggregate decentralized Q functions.
arXiv Detail & Related papers (2023-10-10T17:11:20Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with
Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z) - Deep Multi-Task Learning for Cooperative NOMA: System Design and
Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)