Reinforcement Learning-based Joint Path and Energy Optimization of
Cellular-Connected Unmanned Aerial Vehicles
- URL: http://arxiv.org/abs/2011.13744v1
- Date: Fri, 27 Nov 2020 14:16:55 GMT
- Title: Reinforcement Learning-based Joint Path and Energy Optimization of
Cellular-Connected Unmanned Aerial Vehicles
- Authors: Arash Hooshmand
- Abstract summary: We use reinforcement learning (RL) hierarchically to extend typical short-range path planners to account for battery recharging, solving the problem of UAVs on long missions.
The problem is simulated for a UAV flying over a large area, and a Q-learning algorithm enables the UAV to find the optimal path and recharge policy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unmanned Aerial Vehicles (UAVs) have attracted considerable research interest
recently, especially in the realm of the Internet of Things, where UAVs with
Internet connectivity are in high demand. Furthermore, the energy constraint,
i.e., the battery limit, is a bottleneck of UAVs that can restrict their
applications. We address this energy problem by proposing a path planning
method for a cellular-connected UAV that enables it to plan its path in an
area much larger than its battery range by getting recharged at certain
positions equipped with power stations (PSs). In addition to the energy
constraint, there are also no-fly zones, for example due to Air-to-Air (A2A)
and Air-to-Ground (A2G) interference or a lack of necessary connectivity,
which impose extra constraints on the trajectory optimization of the UAV.
No-fly zones determine the infeasible areas that must be avoided. We use
reinforcement learning (RL) hierarchically to extend typical short-range path
planners to account for battery recharging, solving the problem of UAVs on
long missions. The problem is simulated for a UAV flying over a large area,
and a Q-learning algorithm enables the UAV to find the optimal path and
recharge policy.
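The core idea, augmenting the planner's state with the battery level and letting tabular Q-learning decide when to detour to a power station, can be sketched in a small grid world. Everything below (grid size, reward values, station and no-fly placements) is an illustrative assumption, not the paper's actual setup; the battery range is deliberately set shorter than the start-to-goal distance so that a recharge stop is mandatory.

```python
import random

# Illustrative grid world: the UAV must reach GOAL, but its battery range
# (CAPACITY) is shorter than the distance, forcing a recharge at a power
# station (PS). No-fly cells are blocked. All constants are assumptions.
SIZE = 5
START, GOAL = (0, 0), (4, 4)
POWER_STATIONS = {(2, 2)}        # visiting a PS recharges to full battery
NO_FLY = {(1, 2), (3, 1)}        # infeasible cells the UAV must avoid
CAPACITY = 6                     # battery range < Manhattan distance of 8
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    """One environment transition; returns (next_state, reward, done)."""
    (x, y), battery = state[:2], state[2]
    nx, ny = x + action[0], y + action[1]
    if not (0 <= nx < SIZE and 0 <= ny < SIZE) or (nx, ny) in NO_FLY:
        nx, ny = x, y            # blocked move: stay, but still spend energy
        reward = -5.0
    else:
        reward = -1.0
    battery -= 1
    if (nx, ny) in POWER_STATIONS:
        battery = CAPACITY       # recharge at the power station
    if (nx, ny) == GOAL:
        return (nx, ny, battery), 100.0, True
    if battery <= 0:
        return (nx, ny, battery), -100.0, True   # battery depleted mid-flight
    return (nx, ny, battery), reward, False

def train(episodes=8000, alpha=0.5, gamma=0.95, seed=0):
    """Tabular Q-learning over the (x, y, battery) state."""
    random.seed(seed)
    Q = {}
    for ep in range(episodes):
        state = (*START, CAPACITY)
        eps = max(0.05, 1.0 - ep / (0.8 * episodes))  # decaying exploration
        for _ in range(50):
            q = Q.setdefault(state, [0.0] * len(ACTIONS))
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[i]))
            nxt, r, done = step(state, ACTIONS[a])
            nq = Q.setdefault(nxt, [0.0] * len(ACTIONS))
            q[a] += alpha * (r + gamma * max(nq) * (not done) - q[a])
            state = nxt
            if done:
                break
    return Q

def greedy_path(Q, limit=20):
    """Roll out the learned greedy policy and return the visited cells."""
    state, path = (*START, CAPACITY), [START]
    for _ in range(limit):
        q = Q.get(state, [0.0] * len(ACTIONS))
        a = max(range(len(ACTIONS)), key=lambda i: q[i])
        state, _, done = step(state, ACTIONS[a])
        path.append(state[:2])
        if done:
            break
    return path
```

Because the goal is 8 steps away but the battery only lasts 6, any successful greedy rollout must pass through the power station at (2, 2) while steering around the no-fly cells, which mirrors the joint path-and-recharge policy the paper learns.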
Related papers
- RL-Based Cargo-UAV Trajectory Planning and Cell Association for Minimum
Handoffs, Disconnectivity, and Energy Consumption [23.734853912297634]
Unmanned aerial vehicle (UAV) is a promising technology for last-mile cargo delivery.
Existing cellular networks were primarily designed to service ground users.
We propose a novel approach for joint cargo-UAV trajectory planning and cell association.
arXiv Detail & Related papers (2023-12-05T04:06:09Z)
- UAV Swarm-enabled Collaborative Secure Relay Communications with
Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAV to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z)
- Deep Reinforcement Learning for Online Routing of Unmanned Aerial
Vehicles with Wireless Power Transfer [9.296415450289706]
Unmanned aerial vehicles (UAVs) play a vital role in various applications such as delivery, military missions, disaster rescue, and communication.
This paper proposes a deep reinforcement learning method to solve the UAV online routing problem with wireless power transfer.
arXiv Detail & Related papers (2022-04-25T07:43:08Z)
- Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points [3.502112118170715]
We propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points.
In our approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps.
arXiv Detail & Related papers (2021-11-03T14:49:17Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and
GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as successful rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for UAV-assisted Internet of Things (IoT) networks.
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Federated Learning for Cellular-connected UAVs: Radio Mapping and Path
Planning [2.4366811507669124]
In this paper, we minimize the travel time of the UAVs, ensuring that a probabilistic connectivity constraint is satisfied.
Since the UAVs have different missions and fly over different areas, their collected data carry local information on the network's connectivity.
In the first step, the UAVs collaboratively build a global model of the outage probability in the environment.
In the second step, by using the global model obtained in the first step and rapidly-exploring random trees (RRTs), we propose an algorithm to optimize UAVs' paths.
arXiv Detail & Related papers (2020-08-23T14:55:37Z)
- Simultaneous Navigation and Radio Mapping for Cellular-Connected UAV
with Deep Reinforcement Learning [46.55077580093577]
How to achieve ubiquitous 3D communication coverage for UAVs in the sky is a new challenge.
We propose a new coverage-aware navigation approach, which exploits the UAV's controllable mobility to design its navigation/trajectory.
We propose a new framework called simultaneous navigation and radio mapping (SNARM), where the UAV's signal measurement is used to train the deep Q network.
arXiv Detail & Related papers (2020-03-17T08:16:14Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.