An Offline Multi-Agent Reinforcement Learning Framework for Radio Resource Management
- URL: http://arxiv.org/abs/2501.12991v1
- Date: Wed, 22 Jan 2025 16:25:46 GMT
- Title: An Offline Multi-Agent Reinforcement Learning Framework for Radio Resource Management
- Authors: Eslam Eldeeb, Hirley Alves
- Abstract summary: Offline multi-agent reinforcement learning (MARL) addresses key limitations of online MARL.
We propose an offline MARL algorithm for radio resource management (RRM).
We evaluate three training paradigms: centralized, independent, and centralized training with decentralized execution (CTDE).
- Score: 5.771885923067511
- Abstract: Offline multi-agent reinforcement learning (MARL) addresses key limitations of online MARL, such as safety concerns, expensive data collection, extended training intervals, and high signaling overhead caused by online interactions with the environment. In this work, we propose an offline MARL algorithm for radio resource management (RRM), focusing on optimizing scheduling policies for multiple access points (APs) to jointly maximize the sum and tail rates of user equipment (UEs). We evaluate three training paradigms: centralized, independent, and centralized training with decentralized execution (CTDE). Our simulation results demonstrate that the proposed offline MARL framework outperforms conventional baseline approaches, achieving over a 15% improvement in a weighted combination of sum and tail rates. Additionally, the CTDE framework strikes an effective balance, reducing the computational complexity of centralized methods while addressing the inefficiencies of independent training. These results underscore the potential of offline MARL to deliver scalable, robust, and efficient solutions for resource management in dynamic wireless networks.
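A minimal Python sketch of the scheduling objective described above. The weights and the tail percentile are illustrative assumptions; the abstract only states that a weighted combination of sum and tail rates is maximized.

```python
import numpy as np

def rrm_reward(ue_rates, w_sum=0.5, w_tail=0.5, tail_pct=5):
    """Weighted combination of sum rate and tail rate across UEs.

    w_sum, w_tail, and tail_pct are illustrative choices, not the
    paper's exact coefficients.
    """
    sum_rate = np.sum(ue_rates)
    tail_rate = np.percentile(ue_rates, tail_pct)  # worst-case users
    return w_sum * sum_rate + w_tail * tail_rate

# Example: per-UE rates (Mbps) observed after one scheduling step.
rates = np.array([12.0, 3.1, 0.4, 7.5, 1.2, 9.8, 0.9, 5.6])
print(rrm_reward(rates))
```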
Related papers
- Offline Critic-Guided Diffusion Policy for Multi-User Delay-Constrained Scheduling [29.431945795881976]
We propose a novel offline reinforcement learning-based algorithm, named SOCD.
It learns efficient scheduling policies purely from pre-collected offline data.
We show that SOCD is resilient to various system dynamics, including partially observable and large-scale environments.
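The abstract does not detail SOCD's internals; a common pattern for critic-guided generative policies is to draw several candidate actions from the generator and keep the one the offline-learned critic scores highest. A hedged sketch, with Gaussian sampling standing in for the diffusion sampler and a toy critic:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_candidates(state, n=16, action_dim=4):
    """Stand-in for a diffusion policy's reverse denoising sampler;
    plain Gaussian candidates are used here for illustration."""
    return rng.normal(size=(n, action_dim))

def critic(state, action):
    """Stand-in for a trained offline critic Q(s, a); a toy
    quadratic score replaces the learned network."""
    return -np.sum(action ** 2)

def critic_guided_act(state, n=16):
    candidates = sample_candidates(state, n)
    scores = np.array([critic(state, a) for a in candidates])
    return candidates[np.argmax(scores)]  # keep highest-value sample

print(critic_guided_act(state=np.zeros(4)))
```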
arXiv Detail & Related papers (2025-01-22T15:13:21Z)
- Learning for Cross-Layer Resource Allocation in MEC-Aided Cell-Free Networks [71.30914500714262]
Cross-layer resource allocation over mobile edge computing (MEC)-aided cell-free networks can fully exploit the transmission and computing resources to improve the data rate.
Joint subcarrier allocation and beamforming optimization are investigated for the MEC-aided cell-free network from the perspective of deep learning.
arXiv Detail & Related papers (2024-12-21T10:18:55Z)
- Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning [24.501511979962746]
Offline multi-agent reinforcement learning (MARL) is increasingly recognized as crucial for effectively deploying RL algorithms in environments where real-time interaction is impractical, risky, or costly.
We present EAQ, Episodes Augmentation guided by Q-total loss, a novel approach for offline MARL framework utilizing diffusion models.
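EAQ guides diffusion-based episode generation with a Q-total loss; as a rough stand-in for that gradient-based guidance, the sketch below rejection-filters synthetic episodes by an estimated joint return. The generator and value estimate are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_episode(length=10, n_agents=2, obs_dim=3):
    """Stand-in for a diffusion model sampling a synthetic episode."""
    return {"obs": rng.normal(size=(length, n_agents, obs_dim)),
            "rew": rng.normal(size=(length,))}

def q_total(episode):
    """Toy joint-value estimate; EAQ would use a learned Q-total."""
    return float(episode["rew"].sum())

def augment(dataset, n_candidates=100):
    """Keep generated episodes whose Q-total beats the dataset
    median, approximating guidance by rejection sampling."""
    threshold = np.median([q_total(ep) for ep in dataset])
    synthetic = (generate_episode() for _ in range(n_candidates))
    return dataset + [ep for ep in synthetic if q_total(ep) > threshold]

data = [generate_episode() for _ in range(20)]
print(len(augment(data)))
```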
arXiv Detail & Related papers (2024-08-23T14:17:17Z)
- Load Balancing in Federated Learning [3.2999744336237384]
Federated Learning (FL) is a decentralized machine learning framework that enables learning from data distributed across multiple remote devices.
This paper proposes a load metric for scheduling policies based on the Age of Information.
We establish the optimal parameters of the Markov chain model and validate our approach through simulations.
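The abstract does not spell out the metric, so the sketch below assumes the simplest Age-of-Information definition (time since the freshest received update) averaged over devices:

```python
def age_of_information(now, last_update):
    """AoI of one device: elapsed time since its freshest update."""
    return now - last_update

def aoi_load_metric(now, last_updates):
    """Illustrative load metric: mean AoI across devices; the
    paper's exact metric and Markov-chain parameters are not
    given in the abstract."""
    ages = [age_of_information(now, t) for t in last_updates]
    return sum(ages) / len(ages)

# Example: four devices last heard from at these times, now = 10.0.
print(aoi_load_metric(10.0, [9.5, 7.0, 8.2, 3.9]))
```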
arXiv Detail & Related papers (2024-08-01T00:56:36Z)
- AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation [65.4532392602682]
One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises from the learned policy deviating from the data collection policy.
This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement as their presence can lead to substantial performance degradation.
We introduce AlberDICE, an offline MARL algorithm that performs centralized training of individual agents based on stationary distribution optimization.
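For context, a generic single-agent DICE objective (OptiDICE-flavored; AlberDICE's exact alternating multi-agent formulation differs) optimizes the stationary distribution d directly, regularized toward the dataset distribution d^D:

```latex
\max_{d \ge 0}\;
\mathbb{E}_{(s,a)\sim d}\!\left[r(s,a)\right]
\;-\; \alpha\, D_f\!\left(d \,\middle\|\, d^{\mathcal{D}}\right)
\quad \text{s.t.}\quad
\sum_{a} d(s,a)
= (1-\gamma)\, p_0(s)
+ \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a') \;\;\forall s .
```

The correction ratio w(s,a) = d(s,a) / d^D(s,a) is what keeps policy improvement away from out-of-distribution joint actions.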
arXiv Detail & Related papers (2023-11-03T18:56:48Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
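A minimal PyTorch sketch of the shared-backbone, multi-head ensemble idea; layer sizes and head count are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MEMTLSketch(nn.Module):
    """Shared backbone feeding several prediction heads whose
    outputs are ensemble-averaged."""
    def __init__(self, in_dim=16, hidden=64, n_heads=3, out_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, out_dim) for _ in range(n_heads)]
        )

    def forward(self, x):
        z = self.backbone(x)                      # shared features
        preds = torch.stack([h(z) for h in self.heads])
        return preds.mean(dim=0)                  # ensemble average

model = MEMTLSketch()
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```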
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Learning-Augmented Decentralized Online Convex Optimization in Networks [40.142341503145275]
This paper studies decentralized online convex optimization in a networked multi-agent system.
It proposes a novel algorithm, Learning-Augmented Decentralized Online optimization (LADO), for individual agents to select actions based only on local online information.
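The abstract does not give LADO's combination rule; a common learning-augmented pattern, shown here as an assumption, blends the ML-advised action with a robust baseline action through a trust parameter:

```python
import numpy as np

def learning_augmented_action(ml_action, baseline_action, lam=0.7):
    """Convex combination of ML advice and a robust baseline.
    lam trades average-case ML performance for worst-case
    robustness; LADO's actual rule and guarantee are in the
    paper, not reconstructed here."""
    ml = np.asarray(ml_action)
    base = np.asarray(baseline_action)
    return lam * ml + (1 - lam) * base

print(learning_augmented_action([1.0, 0.0], [0.2, 0.4]))
```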
arXiv Detail & Related papers (2023-06-16T19:58:39Z)
- A Unified Framework for Alternating Offline Model Training and Policy Learning [62.19209005400561]
In offline model-based reinforcement learning, we learn a dynamics model from historically collected data, and use the learned model and fixed datasets for policy learning.
We develop an iterative offline MBRL framework, where we maximize a lower bound of the true expected return.
With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets.
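One standard way to build such a lower bound (MOPO-style; an assumption, not necessarily this paper's exact construction) penalizes model-predicted rewards by a model-uncertainty estimate u(s, a):

```latex
J(\pi) \;\ge\;
\mathbb{E}_{(s,a)\sim \hat{M},\,\pi}
\big[\, r(s,a) - \lambda\, u(s,a) \,\big],
```

where \hat{M} is the learned dynamics model and \lambda scales the penalty; maximizing the right-hand side is then safe with respect to model error.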
arXiv Detail & Related papers (2022-10-12T04:58:51Z)
- Multi-Agent Reinforcement Learning for Network Load Balancing in Data Center [4.141301293112916]
This paper presents the network load balancing problem, a challenging real-world task for reinforcement learning methods.
The cooperative network load balancing task is formulated as a Dec-POMDP problem, which naturally motivates MARL methods.
To bridge the reality gap for applying learning-based methods, all methods are directly trained and evaluated on an emulation system.
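For reference, the standard Dec-POMDP tuple the formulation rests on, written as a Python structure; the fields are generic and not specific to this paper's load-balancing model:

```python
from typing import Callable, NamedTuple, Sequence

class DecPOMDP(NamedTuple):
    """Generic Dec-POMDP specification <I, S, {A_i}, T, R, {O_i}, Z, gamma>."""
    agents: Sequence[str]    # I: agents
    states: Sequence         # S: global states
    actions: dict            # A_i: per-agent action sets
    transition: Callable     # T(s, joint_a) -> distribution over S
    reward: Callable         # R(s, joint_a) -> shared team reward
    observations: dict       # O_i: per-agent observation sets
    obs_fn: Callable         # Z(s', joint_a) -> joint observation dist.
    gamma: float             # discount factor

print(DecPOMDP._fields)
```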
arXiv Detail & Related papers (2022-01-27T18:47:59Z)
- MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning [61.28547338576706]
Population-based multi-agent reinforcement learning (PB-MARL) refers to a family of methods that nest reinforcement learning (RL) algorithms inside population-based training.
We present MALib, a scalable and efficient computing framework for PB-MARL.
arXiv Detail & Related papers (2021-06-05T03:27:08Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We formulate a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)