Sum Rate Maximization in STAR-RIS-UAV-Assisted Networks: A CA-DDPG Approach for Joint Optimization
- URL: http://arxiv.org/abs/2512.01202v1
- Date: Mon, 01 Dec 2025 02:36:00 GMT
- Title: Sum Rate Maximization in STAR-RIS-UAV-Assisted Networks: A CA-DDPG Approach for Joint Optimization
- Authors: Yujie Huang, Haibin Wan, Xiangcheng Li, Tuanfa Qin, Yun Li, Jun Li, Wen Chen
- Abstract summary: This paper introduces an unmanned aerial vehicle (UAV) to enhance system flexibility and proposes an optimization design for the spectrum efficiency of the STAR-RIS-UAV-assisted wireless communication system. We present a deep reinforcement learning (DRL) algorithm capable of iteratively optimizing beamforming, phase shifts, and UAV positioning to maximize the system's sum rate through continuous interactions with the environment.
- Score: 12.38744459760065
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the rapid advances in programmable materials, reconfigurable intelligent surfaces (RIS) have become a pivotal technology for future wireless communications. The simultaneous transmitting and reflecting reconfigurable intelligent surfaces (STAR-RIS) can both transmit and reflect signals, enabling comprehensive signal control and expanding application scenarios. This paper introduces an unmanned aerial vehicle (UAV) to further enhance system flexibility and proposes an optimization design for the spectrum efficiency of the STAR-RIS-UAV-assisted wireless communication system. We present a deep reinforcement learning (DRL) algorithm capable of iteratively optimizing beamforming, phase shifts, and UAV positioning to maximize the system's sum rate through continuous interactions with the environment. To improve exploration in deterministic policies, we introduce a stochastic perturbation factor, which enhances exploration capabilities. As exploration is strengthened, the algorithm's ability to accurately evaluate the state-action value function becomes critical. Thus, based on the deep deterministic policy gradient (DDPG) algorithm, we propose a convolution-augmented deep deterministic policy gradient (CA-DDPG) algorithm that balances exploration and evaluation to improve the system's sum rate. The simulation results demonstrate that the CA-DDPG algorithm effectively interacts with the environment, optimizing the beamforming matrix, phase shift matrix, and UAV location, thereby improving system capacity and achieving better performance than other algorithms.
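The exploration mechanism described in the abstract — perturbing a deterministic policy output with a stochastic factor — can be sketched roughly as follows. This is an illustrative assumption on our part, not the authors' code: the element count, UAV bounds, and noise scale `sigma` are made-up, and the actor network is replaced by a fixed stub vector.

```python
import numpy as np

# Hypothetical dimensions, not taken from the paper.
N_ELEMENTS = 16   # STAR-RIS elements (one phase shift each)
UAV_DIMS = 3      # UAV position (x, y, z)

rng = np.random.default_rng(0)

def perturb_action(action, sigma=0.1):
    """Add a stochastic perturbation to a deterministic policy output,
    then map each component back to its valid range.

    action[:N_ELEMENTS]  -- phase shifts, wrapped into [0, 2*pi)
    action[N_ELEMENTS:]  -- UAV position, clipped here to [0, 100] m (assumed bound)
    """
    noisy = action + rng.normal(0.0, sigma, size=action.shape)
    phases = np.mod(noisy[:N_ELEMENTS], 2 * np.pi)       # wrap phases
    position = np.clip(noisy[N_ELEMENTS:], 0.0, 100.0)   # keep UAV in bounds
    return np.concatenate([phases, position])

# Stand-in for an actor network's deterministic output.
deterministic = np.concatenate([np.full(N_ELEMENTS, np.pi),
                                np.array([50.0, 50.0, 20.0])])
explored = perturb_action(deterministic, sigma=0.1)
```

In a full DDPG loop the perturbed action would be executed in the environment (here, applied as phase shifts and a UAV waypoint), and the resulting sum rate would serve as the reward.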
Related papers
- STAR-RIS-assisted Collaborative Beamforming for Low-altitude Wireless Networks [58.13757830013997]
Wireless networks based on uncrewed aerial vehicles (UAVs) offer high mobility, flexibility, and coverage for urban communications. However, they face severe signal attenuation in dense environments due to obstructions. To address this critical issue, we consider introducing collaborative beamforming among UAVs in low-altitude networks.
arXiv Detail & Related papers (2025-10-25T01:28:37Z)
- Reconfigurable Intelligent Surface Aided Vehicular Edge Computing: Joint Phase-shift Optimization and Multi-User Power Allocation [28.47670676456068]
We introduce the use of Reconfigurable Intelligent Surfaces (RIS), which provide alternative communication pathways to assist vehicular communication. We propose an innovative deep reinforcement learning (DRL) framework that combines the Deep Deterministic Policy Gradient (DDPG) algorithm for optimizing RIS phase-shift coefficients with the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm for optimizing the power allocation of vehicle users (VUs). Simulation results show that our proposed scheme outperforms the traditional centralized DDPG, Twin Delayed Deep Deterministic Policy Gradient (TD3), and other typical schemes.
arXiv Detail & Related papers (2024-07-18T03:18:59Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, while simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) extend signal coverage. However, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- Reconfigurable Intelligent Surface Assisted VEC Based on Multi-Agent Reinforcement Learning [33.620752444256716]
Vehicular edge computing enables vehicles to perform high-intensity tasks by executing them locally or offloading them to nearby edge devices. Reconfigurable intelligent surfaces (RIS) are introduced to support vehicle communication and provide alternative communication paths. We propose a new deep reinforcement learning framework that employs a modified multi-agent deep deterministic policy gradient algorithm.
arXiv Detail & Related papers (2024-06-17T08:35:32Z)
- Active RIS-aided EH-NOMA Networks: A Deep Reinforcement Learning Approach [66.53364438507208]
An active reconfigurable intelligent surface (RIS)-aided multi-user downlink communication system is investigated.
Non-orthogonal multiple access (NOMA) is employed to improve spectral efficiency, and the active RIS is powered by energy harvesting (EH).
An advanced LSTM based algorithm is developed to predict users' dynamic communication state.
A DDPG based algorithm is proposed to jointly control the amplification matrix and phase shift matrix of the RIS.
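As a hedged sketch of what "jointly controlling the amplification matrix and phase shift matrix" of an active RIS might look like, the snippet below maps a flat DDPG action vector to a diagonal coefficient matrix Θ = diag(aₙ·e^(jθₙ)). The element count and amplification bound are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical size; the paper's actual RIS dimensions are not given here.
N = 8  # number of active RIS elements

def ris_matrix(action):
    """Map a flat action vector to an active-RIS coefficient matrix
    Theta = diag(a_n * exp(j*theta_n)): the first N entries are
    amplification factors, the last N are phase shifts."""
    amps = np.clip(action[:N], 0.0, 4.0)       # amplification bound (assumed)
    phases = np.mod(action[N:], 2 * np.pi)     # phase shifts wrapped to [0, 2*pi)
    return np.diag(amps * np.exp(1j * phases))

action = np.concatenate([np.full(N, 2.0), np.full(N, np.pi / 2)])
Theta = ris_matrix(action)
```

For a passive RIS the amplification factors would be fixed at 1, which is why the active-RIS action space is twice as large per element.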
arXiv Detail & Related papers (2023-04-11T13:16:28Z)
- DRL Enabled Coverage and Capacity Optimization in STAR-RIS Assisted Networks [55.0821435415241]
With STAR-RISs emerging as a new paradigm in wireless communications, analyzing their coverage and capacity performance becomes essential but challenging.
To solve the coverage and capacity optimization problem in STAR-RIS assisted networks, a multi-objective proximal policy optimization (MO-PPO) algorithm is proposed.
In order to improve the performance of the MO-PPO algorithm, two update strategies, i.e., action-value-based update strategy (AVUS) and loss function-based update strategy (LFUS) are investigated.
arXiv Detail & Related papers (2022-09-01T14:54:36Z)
- Energy-Efficient Design for a NOMA assisted STAR-RIS Network with Deep Reinforcement Learning [78.50920340621677]
Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have been considered a promising auxiliary device to enhance the performance of wireless networks.
In this paper, the energy efficiency (EE) problem for a non-orthogonal multiple access (NOMA) network is investigated.
A deep deterministic policy gradient based algorithm is proposed to maximize the EE by jointly optimizing the transmission beamforming vectors at the base station and the coefficient matrices at the STAR-RIS.
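The sum-rate and energy-efficiency objectives that recur across these papers can be made concrete with a small sketch. The SINR values and power figure below are made up for illustration; R = Σₖ log₂(1 + SINRₖ) is the standard form these abstracts refer to.

```python
import numpy as np

def sum_rate(sinr):
    """Sum rate in bit/s/Hz: R = sum_k log2(1 + SINR_k)."""
    return float(np.sum(np.log2(1.0 + np.asarray(sinr, dtype=float))))

def energy_efficiency(sinr, total_power_w):
    """EE = sum rate / total power consumption (bit/s/Hz per watt)."""
    return sum_rate(sinr) / total_power_w

# Two users with illustrative SINRs of 3 and 1:
# log2(1+3) + log2(1+1) = 2 + 1 = 3 bit/s/Hz.
r = sum_rate([3.0, 1.0])   # -> 3.0
ee = energy_efficiency([3.0, 1.0], total_power_w=1.5)  # -> 2.0
```

In the DRL formulations above, this scalar is typically used directly as the per-step reward, so maximizing the return coincides with maximizing the sum rate (or EE).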
arXiv Detail & Related papers (2021-11-30T15:01:19Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
- Reconfigurable Intelligent Surface Assisted Multiuser MISO Systems Exploiting Deep Reinforcement Learning [21.770491711632832]
The reconfigurable intelligent surface (RIS) has been speculated to be one of the key enabling technologies for future sixth generation (6G) wireless communication systems.
In this paper, we investigate the joint design of the transmit beamforming matrix at the base station and the phase shift matrix at the RIS, by leveraging recent advances in deep reinforcement learning (DRL).
The proposed algorithm not only learns from the environment and gradually improves its behavior, but also achieves performance comparable to two state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-02-24T04:28:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.