Hierarchical Reinforcement Learning for Relay Selection and Power
Optimization in Two-Hop Cooperative Relay Network
- URL: http://arxiv.org/abs/2011.04891v2
- Date: Thu, 28 Jan 2021 13:10:47 GMT
- Title: Hierarchical Reinforcement Learning for Relay Selection and Power
Optimization in Two-Hop Cooperative Relay Network
- Authors: Yuanzhe Geng, Erwu Liu, Rui Wang, and Yiming Liu
- Abstract summary: We study the outage probability minimization problem subject to a total transmission power constraint in a two-hop cooperative relay network.
We use reinforcement learning (RL) methods to learn strategies for relay selection and power allocation.
We propose a hierarchical reinforcement learning (HRL) framework and training algorithm.
- Score: 7.5377621697101205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative communication is an effective approach to improve spectrum
utilization. To reduce the outage probability of a communication system, most
studies propose various schemes for relay selection and power allocation, which
are based on the assumption of known channel state information (CSI). However,
accurate CSI is difficult to obtain in practice. In this paper, we study the
outage probability minimization problem subject to a total transmission power
constraint in a two-hop cooperative relay network. We use reinforcement
learning (RL) methods to learn strategies for relay selection and power
allocation that require no prior knowledge of CSI and instead rely solely on
interaction with the communication environment. Notably, conventional RL
methods, including most deep reinforcement learning (DRL) methods, cannot
perform well when the search space is too large. Therefore, we first propose a
DRL framework with an outage-based reward function, which is then used as a
baseline. We then further propose a hierarchical reinforcement learning (HRL)
framework and training algorithm. A key difference from other RL-based methods
in the existing literature is that our proposed HRL approach decomposes relay
selection and power allocation into two hierarchical optimization objectives,
which are trained at different levels. By simplifying the search space, the
HRL approach overcomes the sparse-reward problem on which conventional RL
methods fail. Simulation results reveal that, compared with the traditional
DRL method, the HRL training algorithm reaches convergence 30 training
iterations earlier and reduces the outage probability by 5% in a two-hop relay
network with the same outage threshold.
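To make the decomposition concrete, the problem in the abstract can be read as
minimizing Pr(min(SNR_1, SNR_2) < gamma_th) over the relay choice and power
split, subject to P_s + P_r <= P_total. Below is a minimal sketch under stated
assumptions: a decode-and-forward model with Rayleigh fading, and tabular
epsilon-greedy learners standing in for the paper's deep networks. The relay
count, outage threshold, and learning rates are illustrative placeholders, not
the authors' simulation settings.

```python
# Minimal sketch of the paper's hierarchical decomposition: a high-level
# agent selects a relay, a low-level agent allocates the power split.
# Channel model, thresholds, and learning rates are illustrative
# assumptions, not the authors' exact setup.
import random

K = 4                                      # number of candidate relays (assumption)
POWER_SPLITS = [0.3, 0.4, 0.5, 0.6, 0.7]   # source share of total power
P_TOTAL = 1.0                              # total transmission power constraint
SNR_THRESHOLD = 0.5                        # outage threshold (assumption)
NOISE = 0.1

def rayleigh_gain():
    # |h|^2 under Rayleigh fading is exponentially distributed
    return random.expovariate(1.0)

def step(split):
    """One transmission: returns the outage-based reward (1 = success, 0 = outage).
    The agents never observe the channel gains, i.e. no CSI is assumed."""
    p_s = split * P_TOTAL                  # source power
    p_r = (1.0 - split) * P_TOTAL          # relay power
    snr1 = p_s * rayleigh_gain() / NOISE   # source -> relay hop
    snr2 = p_r * rayleigh_gain() / NOISE   # relay -> destination hop
    snr_e2e = min(snr1, snr2)              # decode-and-forward bottleneck
    return 1.0 if snr_e2e >= SNR_THRESHOLD else 0.0

# High-level Q-values over relays; one low-level Q-table per relay over splits.
q_high = [0.0] * K
q_low = [[0.0] * len(POWER_SPLITS) for _ in range(K)]
ALPHA, EPS = 0.05, 0.1

def eps_greedy(qs):
    if random.random() < EPS:
        return random.randrange(len(qs))
    return max(range(len(qs)), key=lambda i: qs[i])

for episode in range(20000):
    relay = eps_greedy(q_high)             # high level: relay selection
    a = eps_greedy(q_low[relay])           # low level: power allocation
    r = step(POWER_SPLITS[a])              # outage-based reward
    q_low[relay][a] += ALPHA * (r - q_low[relay][a])
    # The high level is credited with the low level's achieved outcome,
    # so each level only searches its own small action space.
    q_high[relay] += ALPHA * (r - q_high[relay])

best_relay = max(range(K), key=lambda i: q_high[i])
best_split = POWER_SPLITS[max(range(len(POWER_SPLITS)),
                              key=lambda j: q_low[best_relay][j])]
print(f"learned relay {best_relay}, source power share {best_split:.1f}, "
      f"estimated success rate {q_high[best_relay]:.2f}")
```

The benefit of the hierarchy shows up in the action spaces: a flat learner
would search all K x |POWER_SPLITS| joint actions under a sparse outage-based
reward, whereas here the high level searches only K relays and the low level
only |POWER_SPLITS| splits per relay.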
Related papers
- Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association in satellite networks.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z) - Event-Triggered Reinforcement Learning Based Joint Resource Allocation for Ultra-Reliable Low-Latency V2X Communications [10.914558012458425]
6G-enabled vehicular networks face the challenge of ensuring ultra-reliable low-latency communication (URLLC) for delivering safety-critical information in a timely manner.
Existing resource allocation schemes for vehicle-to-everything (V2X) communication systems rely on traditional decoding-based algorithms.
arXiv Detail & Related papers (2024-07-18T23:55:07Z) - Hybrid Inverse Reinforcement Learning [34.793570631021005]
The inverse reinforcement learning approach to imitation learning is a double-edged sword.
We propose using hybrid RL -- training on a mixture of online and expert data -- to curtail unnecessary exploration.
We derive both model-free and model-based hybrid inverse RL algorithms with strong policy performance guarantees.
arXiv Detail & Related papers (2024-02-13T23:29:09Z) - Safe and Accelerated Deep Reinforcement Learning-based O-RAN Slicing: A
Hybrid Transfer Learning Approach [20.344810727033327]
We propose and design a hybrid TL-aided approach to provide safe and accelerated convergence in DRL-based O-RAN slicing.
The proposed hybrid approach shows at least 7.7% and 20.7% improvements in the average initial reward value and the percentage of converged scenarios, respectively.
arXiv Detail & Related papers (2023-09-13T18:58:34Z) - Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
arXiv Detail & Related papers (2023-05-29T15:00:09Z) - Meta Reinforcement Learning with Successor Feature Based Context [51.35452583759734]
We propose a novel meta-RL approach that achieves competitive performance compared to existing meta-RL algorithms.
Our method not only learns high-quality policies for multiple tasks simultaneously but also adapts quickly to new tasks with a small amount of training.
arXiv Detail & Related papers (2022-07-29T14:52:47Z) - Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless
Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - A Simple Reward-free Approach to Constrained Reinforcement Learning [33.813302183231556]
This paper bridges reward-free RL and constrained RL. Particularly, we propose a simple meta-algorithm such that given any reward-free RL oracle, the approachability and constrained RL problems can be directly solved with negligible overheads in sample complexity.
arXiv Detail & Related papers (2021-07-12T06:27:30Z) - RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures that can be highly competitive against manually designed policies, and we also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z) - FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance
Metric Learning and Behavior Regularization [10.243908145832394]
We study the offline meta-reinforcement learning (OMRL) problem, a paradigm which enables reinforcement learning (RL) algorithms to quickly adapt to unseen tasks.
This problem is still not fully understood, and two major challenges need to be addressed.
We provide analysis and insight showing that some simple design choices can yield substantial improvements over recent approaches.
arXiv Detail & Related papers (2020-10-02T17:13:39Z)