Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach
- URL: http://arxiv.org/abs/2201.02169v1
- Date: Thu, 6 Jan 2022 18:24:30 GMT
- Title: Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach
- Authors: Filippo Vannella, Alexandre Proutiere, Yassir Jedra, Jaeseong Jeong
- Abstract summary: Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity.
We devise algorithms learning optimal tilt control policies from existing data.
We show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
- Score: 65.27783264330711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controlling antenna tilts in cellular networks is imperative to reach an
efficient trade-off between network coverage and capacity. In this paper, we
devise algorithms learning optimal tilt control policies from existing data (in
the so-called passive learning setting) or from data actively generated by the
algorithms (the active learning setting). We formalize the design of such
algorithms as a Best Policy Identification (BPI) problem in Contextual Linear
Multi-Arm Bandits (CL-MAB). An arm represents an antenna tilt update; the
context captures current network conditions; the reward corresponds to an
improvement of performance, mixing coverage and capacity; and the objective is
to identify, with a given level of confidence, an approximately optimal policy
(a function mapping the context to an arm with maximal reward). For CL-MAB in
both active and passive learning settings, we derive information-theoretical
lower bounds on the number of samples required by any algorithm returning an
approximately optimal policy with a given level of certainty, and devise
algorithms achieving these fundamental limits. We apply our algorithms to the
Remote Electrical Tilt (RET) optimization problem in cellular networks, and
show that they can produce an optimal tilt update policy using far fewer data
samples than naive or existing rule-based learning algorithms.
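To make the CL-MAB formulation concrete, here is a minimal Python sketch of the context-to-arm mapping the abstract describes. The arm set (discrete tilt updates), context dimension, joint feature map, and ridge regularization are illustrative assumptions, and the plain least-squares greedy policy stands in for the paper's BPI algorithms, which additionally track confidence levels to certify approximate optimality.

```python
import numpy as np

# Illustrative CL-MAB setup (assumptions, not the paper's exact design):
# arms are discrete tilt updates, the context is a network-state vector,
# and the reward is assumed linear in a joint context-arm feature map.

ARMS = [-1.0, 0.0, +1.0]  # tilt update in degrees: down-tilt, keep, up-tilt
D = 4                     # context dimension (e.g., coverage/capacity KPIs)

def phi(x, a):
    """Joint context-arm features: one copy of the context per arm slot."""
    f = np.zeros(D * len(ARMS))
    i = ARMS.index(a)
    f[i * D:(i + 1) * D] = x
    return f

class LinearTiltPolicy:
    """Regularized least-squares estimate of theta; the greedy policy
    maps a context to the arm with maximal estimated reward."""

    def __init__(self, dim, reg=1.0):
        self.A = reg * np.eye(dim)   # Gram matrix of observed features
        self.b = np.zeros(dim)       # reward-weighted feature sum

    def update(self, x, a, r):
        f = phi(x, a)
        self.A += np.outer(f, f)
        self.b += r * f

    def policy(self, x):
        theta = np.linalg.solve(self.A, self.b)
        return max(ARMS, key=lambda a: phi(x, a) @ theta)
```

The block-structured features make the reward linear in a single shared parameter vector, which is what lets one estimate of theta rank all candidate tilt updates for any observed network context.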
Related papers
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
The algorithm is able to provide more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- Iteratively Refined Behavior Regularization for Offline Reinforcement Learning [57.10922880400715]
In this paper, we propose a new algorithm that substantially enhances behavior-regularization based on conservative policy iteration.
By iteratively refining the reference policy used for behavior regularization, the conservative policy update guarantees gradual improvement.
Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks.
arXiv Detail & Related papers (2023-06-09T07:46:24Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization [14.028916306297928]
Reinforcement learning (RL) is a powerful machine learning technique that enables an intelligent agent to learn an optimal policy.
We propose a novel value enhancement method to improve the performance of a given initial policy computed by existing state-of-the-art RL algorithms.
arXiv Detail & Related papers (2023-01-05T18:43:40Z)
- Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization [42.865641215856925]
We propose a provably efficient offline contextual bandit with neural network function approximation.
We show that our method generalizes over unseen contexts under a milder condition for distributional shift than the existing OPL works.
We also demonstrate the empirical effectiveness of our method in a range of synthetic and real-world OPL problems.
arXiv Detail & Related papers (2021-11-27T03:57:13Z)
- Neural Network Compatible Off-Policy Natural Actor-Critic Algorithm [16.115903198836694]
Learning optimal behavior from existing data is one of the most important problems in Reinforcement Learning (RL).
This is known as "off-policy control" in RL, where an agent's objective is to compute an optimal policy based on the data obtained from a given policy (known as the behavior policy).
This work proposes an off-policy natural actor-critic algorithm that utilizes state-action distribution correction for handling the off-policy behavior and the natural policy gradient for sample efficiency.
arXiv Detail & Related papers (2021-10-19T14:36:45Z)
- Breaking the Deadly Triad with a Target Network [80.82586530205776]
The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously.
We provide the first convergent linear $Q$-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.
arXiv Detail & Related papers (2021-01-21T21:50:10Z)
- Approximate Midpoint Policy Iteration for Linear Quadratic Control [1.0312968200748118]
We present a midpoint policy iteration algorithm to solve linear quadratic optimal control problems in both model-based and model-free settings.
We show that, in the model-based setting, it achieves cubic convergence, superior to standard policy iteration and policy gradient algorithms, which achieve quadratic and linear convergence, respectively.
arXiv Detail & Related papers (2020-11-28T20:22:10Z)
- Off-policy Learning for Remote Electrical Tilt Optimization [68.8204255655161]
We address the problem of Remote Electrical Tilt (RET) optimization using off-policy Contextual Multi-Armed-Bandit (CMAB) techniques.
We propose CMAB learning algorithms to extract optimal tilt update policies from the data.
Our policies show consistent improvements over the rule-based logging policy used to collect the data.
arXiv Detail & Related papers (2020-05-21T11:30:31Z)
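The last entry above shares the passive (off-policy) setting of the main paper: candidate tilt policies must be scored against data logged by a rule-based policy. A standard, generic way to do this is inverse propensity scoring (IPS). The sketch below is an illustration of that general technique, not the estimator from either paper; the log format, clipping threshold, and deterministic target policy are assumptions.

```python
import numpy as np

# Hypothetical log format: (context, arm, reward, logging_probability),
# where logging_probability is the chance the logging policy picked `arm`.

def ips_value(logs, target_policy, clip=10.0):
    """Estimate the value of target_policy from data collected by a
    (rule-based) logging policy, via clipped inverse propensity scoring."""
    total = 0.0
    for x, a, r, mu_a in logs:
        # For a deterministic target policy, pi(a|x) is an indicator.
        w = (1.0 if target_policy(x) == a else 0.0) / mu_a
        total += min(w, clip) * r
    return total / len(logs)
```

Clipping the importance weights bounds the variance introduced when the target policy picks arms the logging policy rarely chose, at the cost of some bias.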
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.