Aerial Base Station Positioning and Power Control for Securing
Communications: A Deep Q-Network Approach
- URL: http://arxiv.org/abs/2112.11090v1
- Date: Tue, 21 Dec 2021 10:53:58 GMT
- Title: Aerial Base Station Positioning and Power Control for Securing
Communications: A Deep Q-Network Approach
- Authors: Aly Sabri Abdalla, Ali Behfarnia, and Vuk Marojevic
- Abstract summary: The UAV will play a critical role in enhancing the physical layer security of wireless networks.
This paper defines the problem of eavesdropping on the link between the ground user and the UAV, which serves as an aerial base station (ABS).
The reinforcement learning algorithms Q-learning and deep Q-network (DQN) are proposed for optimizing the position of the ABS and the transmission power.
- Score: 3.234560001579256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The unmanned aerial vehicle (UAV) is one of the technological breakthroughs
that supports a variety of services, including communications. The UAV will play
a critical role in enhancing the physical layer security of wireless networks.
This paper defines the problem of eavesdropping on the link between the ground
user and the UAV, which serves as an aerial base station (ABS). The
reinforcement learning algorithms Q-learning and deep Q-network (DQN) are
proposed for optimizing the position of the ABS and the transmission power to
enhance the data rate of the ground user. This increases the secrecy capacity
without the system knowing the location of the eavesdropper. Simulation results
show that the proposed DQN converges quickly and achieves the highest secrecy
capacity compared with Q-learning and the baseline approaches.
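As a rough illustration of the approach described in the abstract, the sketch below pairs a standard secrecy-capacity reward, C_s = [C_user - C_eve]^+, with a minimal DQN agent whose state is the ABS position and power level and whose actions move the ABS or change its transmit power. This is not the authors' code: the grid size, channel model, power levels, and eavesdropper location are assumptions made purely for illustration, and the paper's exact formulation may differ.

```python
# Hypothetical sketch (not the paper's implementation): a minimal DQN that jointly
# positions an aerial base station (ABS) on a grid and selects a transmit-power level.
# The reward is a toy secrecy capacity C_s = [log2(1+SNR_user) - log2(1+SNR_eve)]^+.
# The eavesdropper location is part of the simulated environment only; the agent never observes it.
import math, random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

GRID = 10                               # ABS moves on a GRID x GRID area (assumption)
POWER_LEVELS = [0.1, 0.5, 1.0]          # transmit power in W (assumption)
USER_POS = np.array([2.0, 3.0])         # ground user location (assumption)
EVE_POS = np.array([7.0, 8.0])          # eavesdropper location (hidden from the agent)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # ABS movement directions
JOINT = [(m, p) for m in range(len(ACTIONS))           # joint action = (move, power level)
         for p in range(len(POWER_LEVELS))]

def snr(p_tx, abs_xy, node_xy, height=50.0, noise=1e-9):
    """Toy free-space-like path loss with 10 m grid cells; purely illustrative."""
    d2 = np.sum(((abs_xy - node_xy) * 10.0) ** 2) + height ** 2
    return p_tx / (d2 * noise)

def secrecy_reward(p_tx, abs_xy):
    c_user = math.log2(1 + snr(p_tx, abs_xy, USER_POS))
    c_eve = math.log2(1 + snr(p_tx, abs_xy, EVE_POS))
    return max(c_user - c_eve, 0.0)      # C_s = [C_user - C_eve]^+

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, len(JOINT)))
target = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, len(JOINT)))
target.load_state_dict(net.state_dict())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=5000), 0.95, 1.0

def encode(xy, p_idx):
    # State = normalized ABS position and current power index
    return torch.tensor([xy[0] / GRID, xy[1] / GRID, p_idx / len(POWER_LEVELS)],
                        dtype=torch.float32)

for episode in range(200):
    xy, p_idx = np.array([GRID // 2, GRID // 2], dtype=float), 0
    for step in range(50):
        state = encode(xy, p_idx)
        a = random.randrange(len(JOINT)) if random.random() < eps else int(net(state).argmax())
        move, p_idx = ACTIONS[JOINT[a][0]], JOINT[a][1]
        xy = np.clip(xy + move, 0, GRID - 1)
        r = secrecy_reward(POWER_LEVELS[p_idx], xy)
        buffer.append((state, a, r, encode(xy, p_idx)))

        if len(buffer) >= 64:            # one replay-based gradient step per environment step
            batch = random.sample(buffer, 64)
            s = torch.stack([b[0] for b in batch])
            acts = torch.tensor([b[1] for b in batch])
            rew = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            s2 = torch.stack([b[3] for b in batch])
            q = net(s).gather(1, acts.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                q_next = target(s2).max(1).values
            loss = nn.functional.mse_loss(q, rew + gamma * q_next)
            opt.zero_grad(); loss.backward(); opt.step()
    eps = max(0.05, eps * 0.97)          # decay exploration
    if episode % 20 == 0:                # periodic target-network update
        target.load_state_dict(net.state_dict())
```

Under these toy assumptions the agent tends to drive the ABS toward the ground user and away from the eavesdropper while raising power only when the secrecy gain outweighs the leakage, which mirrors the qualitative behavior the abstract attributes to the DQN.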
Related papers
- Improved Q-learning based Multi-hop Routing for UAV-Assisted Communication [4.799822253865053]
This paper proposes a novel Improved Q-learning-based Multi-hop Routing (IQMR) algorithm for optimal UAV-assisted communication systems.
Using Q(lambda) learning for routing decisions, IQMR substantially enhances energy efficiency and network data throughput.
arXiv Detail & Related papers (2024-08-17T06:24:31Z) - Covert Communication for Untrusted UAV-Assisted Wireless Systems [1.2190851745229392]
UAV-assisted covert communication is a supporting technology for improving covert performance.
This paper investigates the performance of joint covert and secure communication in a two-hop UAV-assisted wireless system.
arXiv Detail & Related papers (2024-03-14T15:17:56Z) - Multi-Agent Reinforcement Learning for Offloading Cellular Communications with Cooperating UAVs [21.195346908715972]
Unmanned aerial vehicles present an alternative means to offload data traffic from terrestrial BSs.
This paper presents a novel approach to efficiently serve multiple UAVs for data offloading from terrestrial BSs.
arXiv Detail & Related papers (2024-02-05T12:36:08Z) - Label-free Deep Learning Driven Secure Access Selection in
Space-Air-Ground Integrated Networks [26.225658457052834]
In space-air-ground integrated networks (SAGIN), the inherent openness and extensive broadcast coverage expose these networks to significant eavesdropping threats.
It is challenging to conduct a secrecy-oriented access strategy due to both heterogeneous resources and different eavesdropping models.
We propose a Q-network approximation based deep learning approach for selecting the optimal access strategy for maximizing the sum secrecy rate.
arXiv Detail & Related papers (2023-08-28T06:48:06Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - RIS-assisted UAV Communications for IoT with Wireless Power Transfer
Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In a first phase, IoT devices harvest energy from the UAV through wireless power transfer; and then in a second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
arXiv Detail & Related papers (2021-08-05T23:55:44Z) - 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z) - A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From
Communications to Sensing and Intelligence [152.89360859658296]
5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC).
On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in 3D space.
On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference.
arXiv Detail & Related papers (2020-10-19T08:56:04Z) - UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement
Learning Approach [18.266087952180733]
We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
arXiv Detail & Related papers (2020-07-01T15:14:16Z) - Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z) - Artificial Intelligence Aided Next-Generation Networks Relying on UAVs [140.42435857856455]
Artificial intelligence (AI) assisted unmanned aerial vehicle (UAV) aided next-generation networking is proposed for dynamic environments.
In the AI-enabled UAV-aided wireless networks (UAWN), multiple UAVs are employed as aerial base stations, which are capable of rapidly adapting to the dynamic environment.
As a benefit of the AI framework, several challenges of conventional UAWN may be circumvented, leading to enhanced network performance, improved reliability and agile adaptivity.
arXiv Detail & Related papers (2020-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.