Transmit Power Control for Indoor Small Cells: A Method Based on
Federated Reinforcement Learning
- URL: http://arxiv.org/abs/2209.13536v1
- Date: Wed, 31 Aug 2022 14:46:09 GMT
- Title: Transmit Power Control for Indoor Small Cells: A Method Based on
Federated Reinforcement Learning
- Authors: Peizheng Li, Hakan Erdol, Keith Briggs, Xiaoyang Wang, Robert
Piechocki, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Angela Doufexi, Arjun
Parekh
- Abstract summary: This paper proposes a distributed cell power-control scheme based on Federated Reinforcement Learning (FRL).
Models in different indoor environments are aggregated to the global model during the training process, and then the central server broadcasts the updated model back to each client.
The results of the generalisation test show that using the FRL model as the base model improves the convergence speed of the model in the new environment.
- Score: 2.392377380146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Setting the transmit power of 5G cells has been a long-standing topic of
discussion, as optimized power settings can help reduce interference and
improve the quality of service to users. Recently, machine learning (ML)-based,
especially reinforcement learning (RL)-based control methods have received much
attention. However, there is little discussion about the generalisation ability
of the trained RL models. This paper points out that an RL agent trained in a
specific indoor environment is room-dependent, and cannot directly serve new
heterogeneous environments. Therefore, in the context of Open Radio Access
Network (O-RAN), this paper proposes a distributed cell power-control scheme
based on Federated Reinforcement Learning (FRL). Models in different indoor
environments are aggregated to the global model during the training process,
and then the central server broadcasts the updated model back to each client.
The model will also be used as the base model for adaptive training in the new
environment. The simulation results show that the FRL model has similar
performance to a single RL agent, and both are better than the random power
allocation method and exhaustive search method. The results of the
generalisation test show that using the FRL model as the base model improves
the convergence speed of the model in the new environment.
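The abstract describes the FRL loop only at a high level: client models trained in different indoor environments are aggregated into a global model, which the central server then broadcasts back to each client. The paper's exact aggregation rule is not given here; a minimal sketch of the common FedAvg-style weighted averaging that such schemes typically use (function names, shapes, and the client data below are illustrative, not from the paper) looks like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter lists into a global model by
    weighted averaging (FedAvg-style), weights proportional to each
    client's local data size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all clients
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(agg)
    return global_weights

# Three simulated clients, each holding a toy two-layer "model"
clients = [
    [np.array([1.0, 2.0]), np.array([0.5])],
    [np.array([3.0, 4.0]), np.array([1.5])],
    [np.array([5.0, 6.0]), np.array([2.5])],
]
sizes = [10, 10, 10]  # equal local data sizes -> plain averaging

global_model = fedavg(clients, sizes)
# The server would now broadcast global_model back to each client,
# and a new environment could use it as the base model for adaptation.
```

With equal client sizes this reduces to a plain per-layer mean, which is consistent with the generalisation result reported above: the averaged model serves as a warm start that speeds up convergence in a new room.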
Related papers
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Adding Conditional Control to Diffusion Models with Reinforcement Learning [59.295203871547336]
Diffusion models are powerful generative models that allow for precise control over the characteristics of the generated samples.
This work presents a novel method based on reinforcement learning (RL) to add additional controls, leveraging an offline dataset.
arXiv Detail & Related papers (2024-06-17T22:00:26Z)
- A Unified Framework for Alternating Offline Model Training and Policy Learning [62.19209005400561]
In offline model-based reinforcement learning, we learn a dynamic model from historically collected data, and utilize the learned model and fixed datasets for policy learning.
We develop an iterative offline MBRL framework, where we maximize a lower bound of the true expected return.
With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets.
arXiv Detail & Related papers (2022-10-12T04:58:51Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose to address this by using unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective which jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- Out-of-distribution Detection via Frequency-regularized Generative Models [23.300763504208593]
Deep generative models can assign high likelihood to inputs drawn from outside the training distribution.
In particular, generative models are shown to overly rely on the background information to estimate the likelihood.
We propose a novel frequency-regularized learning (FRL) framework for OOD detection, which incorporates high-frequency information into training and guides the model to focus on semantically relevant features.
arXiv Detail & Related papers (2022-08-18T22:34:08Z)
- RLFlow: Optimising Neural Network Subgraph Transformation with World Models [0.0]
We propose a model-based agent which learns to optimise the architecture of neural networks by performing a sequence of subgraph transformations to reduce model runtime.
We show our approach can match the state-of-the-art performance on common convolutional networks and outperform it by up to 5% on transformer-style architectures.
arXiv Detail & Related papers (2022-05-03T11:52:54Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- Federated Ensemble Model-based Reinforcement Learning in Edge Computing [21.840086997141498]
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm.
We propose a novel FRL algorithm that effectively incorporates model-based RL and ensemble knowledge distillation into FL for the first time.
Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for clients, and then train the policy by solely using the ensemble model without interacting with the environment.
arXiv Detail & Related papers (2021-09-12T16:19:10Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy model guided fuzzer for software testing that achieves comparable performance to well engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.