Offline Contextual Bandits for Wireless Network Optimization
- URL: http://arxiv.org/abs/2111.08587v1
- Date: Thu, 11 Nov 2021 11:31:20 GMT
- Title: Offline Contextual Bandits for Wireless Network Optimization
- Authors: Miguel Suau, Alexandros Agapitos, David Lynch, Derek Farrell, Mingqi Zhou, Aleksandar Milenovic
- Abstract summary: In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to changes in user demand.
Our solution combines existing methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
- Score: 107.24086150482843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The explosion in mobile data traffic together with the ever-increasing
expectations for higher quality of service call for the development of AI
algorithms for wireless network optimization. In this paper, we investigate how
to learn policies that can automatically adjust the configuration parameters of
every cell in the network in response to changes in user demand. Our
solution combines existing methods for offline learning and adapts them in a
principled way to overcome crucial challenges arising in this context.
Empirical results suggest that our proposed method will achieve significant
performance gains when deployed in the real network while satisfying practical
constraints on computational efficiency.
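A core building block in offline contextual-bandit work of this kind is estimating a candidate policy's value purely from logged interaction data. The sketch below uses clipped inverse propensity scoring (IPS), a standard off-policy estimator, on toy data (scalar demand contexts and two hypothetical cell-configuration actions); it illustrates the offline setting, not the paper's actual method.

```python
import numpy as np

def ips_value(contexts, actions, rewards, logging_probs, target_policy, clip=10.0):
    """Estimate a target policy's value from logged bandit data
    with clipped inverse propensity scoring (IPS)."""
    # Probability the target policy assigns to each logged action
    target_probs = np.array([target_policy(c)[a] for c, a in zip(contexts, actions)])
    weights = np.minimum(target_probs / logging_probs, clip)  # clip large ratios
    return float(np.mean(weights * rewards))

# Toy logged data: contexts are demand levels, two hypothetical
# cell-configuration actions, collected by a uniform-random logging policy.
rng = np.random.default_rng(0)
n = 1000
contexts = rng.uniform(0.0, 1.0, size=n)
actions = rng.integers(0, 2, size=n)
logging_probs = np.full(n, 0.5)
# Reward 1 when the "right" configuration was chosen for the demand level.
rewards = (actions == (contexts > 0.5)).astype(float)

# Target policy: pick configuration 1 under high demand, else configuration 0.
def target_policy(c):
    return np.array([0.0, 1.0]) if c > 0.5 else np.array([1.0, 0.0])

v = ips_value(contexts, actions, rewards, logging_probs, target_policy)
```

Because the estimator only reweights logged rewards, it can evaluate new configuration policies without deploying them, which is precisely why offline approaches suit live networks.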
Related papers
- Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services [5.80147190706865]
6G networks will embrace a new realm of AI-driven services that require innovative network slicing strategies.
This paper proposes an online learning framework to optimize the allocation of computational and communication resources to AI services.
arXiv Detail & Related papers (2024-10-20T14:38:54Z)
- DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation [58.62766376631344]
We propose a customized wireless network intent (WNI-G) model to address different state variations of wireless communication networks.
Extensive simulations demonstrate greater stability in spectral efficiency than traditional DRL models in dynamic communication systems.
arXiv Detail & Related papers (2024-10-18T14:04:38Z)
- Continual Model-based Reinforcement Learning for Data Efficient Wireless Network Optimisation [73.04087903322237]
We formulate throughput optimisation as Continual Reinforcement Learning of control policies.
Simulation results suggest that the proposed system is able to shorten the end-to-end deployment lead time two-fold.
arXiv Detail & Related papers (2024-04-30T11:23:31Z)
- Multi-Agent Reinforcement Learning for Power Control in Wireless Networks via Adaptive Graphs [1.1861167902268832]
Multi-agent deep reinforcement learning (MADRL) has emerged as a promising method to address a wide range of complex optimization problems like power control.
We present the use of graphs as communication-inducing structures among distributed agents as an effective means to mitigate these challenges.
arXiv Detail & Related papers (2023-11-27T14:25:40Z)
- Learning to Transmit with Provable Guarantees in Wireless Federated Learning [40.11488246920875]
We propose a novel data-driven approach to allocate transmit power for federated learning (FL) over interference-limited wireless networks.
The proposed method is useful in challenging scenarios where the wireless channel is changing during the FL training process.
Ultimately, our goal is to improve the accuracy and efficiency of the global FL model being trained.
arXiv Detail & Related papers (2023-04-18T22:28:03Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- Phase Shift Design in RIS Empowered Wireless Networks: From Optimization to AI-Based Methods [83.98961686408171]
Reconfigurable intelligent surfaces (RISs) have a revolutionary capability to customize the radio propagation environment for wireless networks.
To fully exploit the advantages of RISs in wireless systems, the phases of the reflecting elements must be jointly designed with conventional communication resources.
This paper provides a review of current optimization methods and artificial intelligence-based methods for handling the constraints imposed by RIS.
arXiv Detail & Related papers (2022-04-28T09:26:14Z)
- Cellular traffic offloading via Opportunistic Networking with Reinforcement Learning [0.5758073912084364]
We propose an adaptive offloading solution based on the Reinforcement Learning framework.
We evaluate and compare the performance of two well-known learning algorithms: Actor-Critic and Q-Learning.
Our solution achieves a higher level of offloading than other state-of-the-art approaches.
arXiv Detail & Related papers (2021-10-01T13:34:12Z)
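The Actor-Critic and Q-Learning algorithms compared in the offloading paper above both rest on standard temporal-difference value updates. As a minimal illustration (a hypothetical one-step task, not the paper's offloading environment), tabular Q-Learning can be sketched as:

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Minimal tabular Q-Learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = env_step(s, a, rng)
            # Q-Learning update toward the bootstrapped target
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical one-step task: action 1 ("offload") yields reward 1, action 0 yields 0.
def toy_step(s, a, rng):
    return 0, float(a == 1), True

Q = q_learning(toy_step, n_states=1, n_actions=2)
```

After training, the greedy policy derived from `Q` prefers the rewarding action, mirroring how such methods learn when offloading pays off.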
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.