DRL Enabled Coverage and Capacity Optimization in STAR-RIS Assisted
Networks
- URL: http://arxiv.org/abs/2209.00511v2
- Date: Mon, 24 Jul 2023 15:38:59 GMT
- Title: DRL Enabled Coverage and Capacity Optimization in STAR-RIS Assisted
Networks
- Authors: Xinyu Gao, Wenqiang Yi, Yuanwei Liu, Jianhua Zhang, Ping Zhang
- Abstract summary: With STAR-RISs emerging as a new paradigm in wireless communications, analyzing their coverage and capacity performance becomes essential but challenging.
To solve the coverage and capacity optimization problem in STAR-RIS assisted networks, a multi-objective proximal policy optimization (MO-PPO) algorithm is proposed.
To improve the performance of the MO-PPO algorithm, two update strategies, i.e., an action-value-based update strategy (AVUS) and a loss function-based update strategy (LFUS), are investigated.
- Score: 55.0821435415241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneously transmitting and reflecting reconfigurable intelligent
surfaces (STAR-RISs) are promising passive devices that contribute to
full-space coverage by transmitting and reflecting the incident signal
simultaneously. Since STAR-RISs constitute a new paradigm in wireless
communications, analyzing their coverage and capacity performance becomes
essential but challenging. To solve the coverage and capacity optimization
(CCO) problem in STAR-RIS assisted networks, a multi-objective proximal policy
optimization (MO-PPO) algorithm is proposed, which captures long-term benefits
better than conventional optimization algorithms. To strike a balance among
the objectives, the MO-PPO
algorithm provides a set of optimal solutions to form a Pareto front (PF),
where any solution on the PF is regarded as an optimal result. Moreover, in
order to improve the performance of the MO-PPO algorithm, two update
strategies, i.e., action-value-based update strategy (AVUS) and loss
function-based update strategy (LFUS), are investigated. For the AVUS, the
improvement is to integrate the action values of both coverage and capacity
and then update the loss function. For the LFUS, the improvement is to assign
dynamic weights to the loss functions of coverage and capacity, where the
weights are calculated by a min-norm solver at every update. The numerical
results demonstrate that the investigated update strategies outperform
fixed-weight MO optimization algorithms in different cases, including
different numbers of sample grids, STAR-RISs, elements per STAR-RIS, and
STAR-RIS sizes. Additionally, the
STAR-RIS assisted networks achieve better performance than conventional
wireless networks without STAR-RISs. Moreover, with the same bandwidth,
millimeter wave is able to provide higher capacity than sub-6 GHz, but at the
cost of smaller coverage.
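The Pareto front mentioned in the abstract is simply the non-dominated subset of the candidate (coverage, capacity) operating points produced by the algorithm. The following is a minimal sketch of that filtering step; the candidate scores are illustrative placeholders, not results from the paper.

```python
from typing import List, Tuple

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the non-dominated subset of (coverage, capacity) pairs.

    A point q dominates p if q is at least as good in both objectives
    (both maximized) and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (coverage, capacity) scores of candidate MO-PPO policies.
candidates = [(0.82, 3.1), (0.75, 4.0), (0.90, 2.2), (0.70, 3.0)]
print(pareto_front(candidates))  # (0.70, 3.0) is dominated and dropped
```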
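For the LFUS, the abstract only states that the per-update weights come from a min-norm solver. For two objectives such a solver has a standard closed form (as used in MGDA-style multi-objective learning); the sketch below is an assumption about that step rather than the paper's exact implementation, and the gradient values are placeholders.

```python
from typing import Tuple
import numpy as np

def min_norm_weights(g_cov: np.ndarray, g_cap: np.ndarray) -> Tuple[float, float]:
    """Two-objective min-norm solver: find alpha in [0, 1] minimizing
    ||alpha * g_cov + (1 - alpha) * g_cap||^2, and return the dynamic
    weights (alpha, 1 - alpha) for the coverage and capacity losses."""
    diff = g_cov - g_cap
    denom = float(diff @ diff)
    if denom < 1e-12:                     # gradients nearly identical: split evenly
        return 0.5, 0.5
    alpha = float((g_cap - g_cov) @ g_cap) / denom
    alpha = min(max(alpha, 0.0), 1.0)     # clip to the feasible interval [0, 1]
    return alpha, 1.0 - alpha

# Placeholder flattened gradients of the coverage and capacity losses.
g_coverage = np.array([0.4, -1.2, 0.7])
g_capacity = np.array([-0.3, 0.9, 0.1])
w_cov, w_cap = min_norm_weights(g_coverage, g_capacity)
combined_grad = w_cov * g_coverage + w_cap * g_capacity  # drives a single policy update
```

Unlike the fixed-weight baselines, these weights are recomputed from the two loss gradients at every update, which is the behavior the LFUS description implies.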
Related papers
- Multi-Agent Deep Reinforcement Learning for Energy Efficient Multi-Hop STAR-RIS-Assisted Transmissions [9.462149599416263]
We propose the novel architecture of multi-hop STAR-RISs to achieve a wider range of full-plane service coverage.
The proposed architecture achieves the highest energy efficiency compared with mode-switching-based STAR-RISs, conventional RISs, and deployments without RISs or STAR-RISs.
arXiv Detail & Related papers (2024-07-26T09:35:50Z)
- Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The Non-dominated sorting genetic algorithm-II (NSGA-II) method is then employed to optimize the outcomes produced by the WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, while a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) extends coverage to both sides of the surface.
However, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- Coverage and Capacity Optimization in STAR-RISs Assisted Networks: A Machine Learning Approach [102.00221938474344]
A novel model is proposed for the coverage and capacity optimization of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) assisted networks.
A loss function-based update strategy is the core, calculating weights for both the coverage and capacity loss functions with a min-norm solver at each update.
The numerical results demonstrate that the investigated update strategy outperforms the fixed weight-based MO algorithms.
arXiv Detail & Related papers (2022-04-13T13:52:22Z)
- Energy-Efficient Design for a NOMA assisted STAR-RIS Network with Deep Reinforcement Learning [78.50920340621677]
Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have been considered promising auxiliary devices to enhance the performance of wireless networks.
In this paper, the energy efficiency (EE) problem for a non-orthogonal multiple access (NOMA) network is investigated.
A deep deterministic policy gradient-based algorithm is proposed to maximize the EE by jointly optimizing the transmission beamforming vectors at the base station and the coefficient matrices at the STAR-RIS.
arXiv Detail & Related papers (2021-11-30T15:01:19Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)