Aerial Base Station Positioning and Power Control for Securing
Communications: A Deep Q-Network Approach
- URL: http://arxiv.org/abs/2112.11090v1
- Date: Tue, 21 Dec 2021 10:53:58 GMT
- Title: Aerial Base Station Positioning and Power Control for Securing
Communications: A Deep Q-Network Approach
- Authors: Aly Sabri Abdalla, Ali Behfarnia, and Vuk Marojevic
- Abstract summary: UAVs will play a critical role in enhancing the physical layer security of wireless networks.
This paper defines the problem of eavesdropping on the link between the ground user and the UAV.
The reinforcement learning algorithms Q-learning and deep Q-network (DQN) are proposed for optimizing the position of the ABS and the transmission power.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The unmanned aerial vehicle (UAV) is one of the technological breakthroughs
that supports a variety of services, including communications. UAVs will play a
critical role in enhancing the physical layer security of wireless networks.
This paper defines the problem of eavesdropping on the link between the ground
user and the UAV, which serves as an aerial base station (ABS). The
reinforcement learning algorithms Q-learning and deep Q-network (DQN) are
proposed for optimizing the position of the ABS and the transmission power to
enhance the data rate of the ground user. This increases the secrecy capacity
without the system knowing the location of the eavesdropper. Simulation results
show fast convergence and the highest secrecy capacity of the proposed DQN
compared to Q-learning and baseline approaches.
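The tabular Q-learning variant the abstract proposes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid size, power levels, free-space path-loss model, node coordinates, and hyperparameters are all assumptions chosen for brevity. The reward is the standard secrecy capacity C_s = [log2(1 + SNR_user) - log2(1 + SNR_eve)]^+, and the eavesdropper's location is used only inside the environment, never by the agent.

```python
import numpy as np

# Hypothetical scenario (not from the paper): a 5x5 grid of candidate ABS
# positions and three discrete transmit power levels.
GRID = 5
POWERS = [0.1, 0.5, 1.0]          # transmit power levels (watts)
USER = np.array([1.0, 1.0])       # ground-user coordinates
EVE = np.array([3.5, 0.5])        # eavesdropper coordinates (hidden from agent)
NOISE = 1e-3

def secrecy_rate(pos, p):
    """Secrecy capacity [log2(1+SNR_u) - log2(1+SNR_e)]^+ under a simple
    free-space path-loss model (an illustrative assumption)."""
    d_u = np.linalg.norm(pos - USER) + 0.1   # offset avoids division by zero
    d_e = np.linalg.norm(pos - EVE) + 0.1
    snr_u = p / (NOISE * d_u**2)
    snr_e = p / (NOISE * d_e**2)
    return max(np.log2(1 + snr_u) - np.log2(1 + snr_e), 0.0)

# Tabular Q-learning over joint (movement, power-level) actions.
n_states = GRID * GRID
n_actions = 5 * len(POWERS)       # {stay, N, S, E, W} x power level
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    """Apply action a in state s; return next state and secrecy reward."""
    x, y = divmod(s, GRID)
    move, p_idx = divmod(a, len(POWERS))
    dx, dy = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)][move]
    x = int(np.clip(x + dx, 0, GRID - 1))
    y = int(np.clip(y + dy, 0, GRID - 1))
    r = secrecy_rate(np.array([x, y], dtype=float), POWERS[p_idx])
    return x * GRID + y, r

s = 0
for _ in range(5000):
    # Epsilon-greedy exploration, then one-step temporal-difference update.
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

best_state, best_action = np.unravel_index(Q.argmax(), Q.shape)
print("best (state, action):", best_state, best_action)
```

The DQN version in the paper replaces the Q-table with a neural network so that finer position and power grids remain tractable; the environment and reward structure stay the same.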
Related papers
- Low-altitude Friendly-Jamming for Satellite-Maritime Communications via Generative AI-enabled Deep Reinforcement Learning [72.72954660774002]
Low Earth Orbit (LEO) satellites can be used to assist maritime wireless communications for data transmission across wide-ranging areas.
Extensive coverage of LEO satellites, combined with openness of channels, can cause the communication process to suffer from security risks.
This paper presents a low-altitude friendly-jamming LEO satellite-maritime communication system enabled by an unmanned aerial vehicle.
arXiv Detail & Related papers (2025-01-26T10:13:51Z)
- UAV Virtual Antenna Array Deployment for Uplink Interference Mitigation in Data Collection Networks [71.23793087286703]
Unmanned aerial vehicles (UAVs) have gained considerable attention as a platform for establishing aerial wireless networks and communications.
This paper explores a novel uplink interference mitigation approach based on the collaborative beamforming (CB) method in multi-UAV network systems.
arXiv Detail & Related papers (2024-12-09T12:56:50Z)
- Improved Q-learning based Multi-hop Routing for UAV-Assisted Communication [4.799822253865053]
This paper proposes a novel, Improved Q-learning-based Multi-hop Routing (IQMR) algorithm for optimal UAV-assisted communication systems.
Using Q(lambda) learning for routing decisions, IQMR substantially enhances energy efficiency and network data throughput.
arXiv Detail & Related papers (2024-08-17T06:24:31Z)
- Multi-Agent Reinforcement Learning for Offloading Cellular Communications with Cooperating UAVs [21.195346908715972]
Unmanned aerial vehicles present an alternative means to offload data traffic from terrestrial BSs.
This paper presents a novel approach to efficiently serve multiple UAVs for data offloading from terrestrial BSs.
arXiv Detail & Related papers (2024-02-05T12:36:08Z)
- Label-free Deep Learning Driven Secure Access Selection in Space-Air-Ground Integrated Networks [26.225658457052834]
In space-air-ground integrated networks (SAGIN), the inherent openness and extensive broadcast coverage expose these networks to significant eavesdropping threats.
It is challenging to conduct a secrecy-oriented access strategy due to both heterogeneous resources and different eavesdropping models.
We propose a Q-network approximation based deep learning approach for selecting the optimal access strategy for maximizing the sum secrecy rate.
arXiv Detail & Related papers (2023-08-28T06:48:06Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence [152.89360859658296]
5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC).
On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in 3D space.
On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference.
arXiv Detail & Related papers (2020-10-19T08:56:04Z)
- UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach [18.266087952180733]
We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
arXiv Detail & Related papers (2020-07-01T15:14:16Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
- Artificial Intelligence Aided Next-Generation Networks Relying on UAVs [140.42435857856455]
Artificial intelligence (AI) assisted unmanned aerial vehicle (UAV) aided next-generation networking is proposed for dynamic environments.
In the AI-enabled UAV-aided wireless networks (UAWN), multiple UAVs are employed as aerial base stations, which are capable of rapidly adapting to the dynamic environment.
As a benefit of the AI framework, several challenges of conventional UAWN may be circumvented, leading to enhanced network performance, improved reliability and agile adaptivity.
arXiv Detail & Related papers (2020-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.