Federated Learning for Cellular-connected UAVs: Radio Mapping and Path
Planning
- URL: http://arxiv.org/abs/2008.10054v1
- Date: Sun, 23 Aug 2020 14:55:37 GMT
- Title: Federated Learning for Cellular-connected UAVs: Radio Mapping and Path
Planning
- Authors: Behzad Khamidehi and Elvino S. Sousa
- Abstract summary: In this paper, we minimize the travel time of the UAVs, ensuring that a probabilistic connectivity constraint is satisfied.
Since the UAVs have different missions and fly over different areas, their collected data carry local information on the network's connectivity.
In the first step, the UAVs collaboratively build a global model of the outage probability in the environment.
In the second step, by using the global model obtained in the first step and rapidly-exploring random trees (RRTs), we propose an algorithm to optimize UAVs' paths.
- Score: 2.4366811507669124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To prolong the lifetime of the unmanned aerial vehicles (UAVs), the UAVs need
to fulfill their missions in the shortest possible time. In addition to this
requirement, in many applications, the UAVs require a reliable internet
connection during their flights. In this paper, we minimize the travel time of
the UAVs, ensuring that a probabilistic connectivity constraint is satisfied.
To solve this problem, we need a global model of the outage probability in the
environment. Since the UAVs have different missions and fly over different
areas, their collected data carry local information on the network's
connectivity. As a result, the UAVs cannot rely on their own experiences to
build the global model. This issue affects the path planning of the UAVs. To
address this concern, we utilize a two-step approach. In the first step, by
using Federated Learning (FL), the UAVs collaboratively build a global model of
the outage probability in the environment. In the second step, by using the
global model obtained in the first step and rapidly-exploring random trees
(RRTs), we propose an algorithm to optimize UAVs' paths. Simulation results
show the effectiveness of this two-step approach for UAV networks.
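The two-step approach described in the abstract can be sketched in code — a minimal, hypothetical illustration, not the authors' implementation: a FedAvg-style weighted average stands in for the collaborative model building of step one, and a goal-biased RRT that rejects points violating an outage threshold stands in for the constrained path planner of step two. All function names, the outage model interface, and the parameter values (`p_max`, `step`, `bounds`) are assumptions.

```python
import random
import numpy as np

# Step 1 (sketch): FedAvg-style aggregation. Each UAV trains a local outage
# model on its own measurements; a server averages the parameters, weighted
# by each UAV's sample count, to obtain the global model.
def federated_average(local_weights, sample_counts):
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Step 2 (sketch): an RRT that grows toward the goal but only keeps points
# whose predicted outage probability satisfies the connectivity constraint.
def rrt_plan(start, goal, outage_prob, p_max=0.1, step=1.0,
             bounds=100.0, iters=2000, goal_bias=0.1):
    goal = np.asarray(goal, dtype=float)
    tree = {tuple(start): None}  # node -> parent
    for _ in range(iters):
        sample = goal if random.random() < goal_bias \
            else np.random.uniform(0.0, bounds, size=2)
        nearest = min(tree, key=lambda n: np.linalg.norm(np.array(n) - sample))
        direction = sample - np.array(nearest)
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        new = np.array(nearest) + step * direction / norm
        if outage_prob(new) <= p_max:  # probabilistic connectivity constraint
            tree[tuple(new)] = nearest
            if np.linalg.norm(new - goal) < step:
                path = [tuple(new)]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1] + [tuple(goal)]
    return None  # no connectivity-compliant path found within the budget
```

In this sketch, `outage_prob` would be the global model produced by step one; the planner simply refuses to expand the tree into regions where the predicted outage exceeds the threshold.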
Related papers
- How Far are Modern Trackers from UAV-Anti-UAV? A Million-Scale Benchmark and New Baseline [74.4054700050366]
Unmanned Aerial Vehicles (UAVs) offer wide-ranging applications but also pose significant safety and privacy violation risks. Current Anti-UAV research primarily focuses on RGB, infrared (IR), or RGB-IR videos captured by fixed ground cameras. We propose a new multi-modal visual tracking task termed UAV-Anti-UAV, which involves a pursuer UAV tracking a target adversarial UAV in the video stream.
arXiv Detail & Related papers (2025-12-08T10:19:54Z) - Trajectory Design for UAV-Based Low-Altitude Wireless Networks in Unknown Environments: A Digital Twin-Assisted TD3 Approach [62.11847362756054]
Unmanned aerial vehicles (UAVs) are emerging as key enablers for low-altitude wireless networks (LAWNs). We propose a digital twin (DT)-assisted training and deployment framework. In this framework, the UAV transmits integrated sensing and communication signals to provide communication services to ground users, while simultaneously collecting echoes that are uploaded to the DT server to progressively construct virtual environments (VEs). These VEs accelerate model training and are continuously updated with real-time UAV sensing data during deployment, supporting decision-making and enhancing flight safety.
arXiv Detail & Related papers (2025-10-28T10:05:53Z) - Maximizing UAV Cellular Connectivity with Reinforcement Learning for BVLoS Path Planning [2.9248680865344343]
This paper presents a reinforcement learning (RL) based approach for path planning of cellular-connected unmanned aerial vehicles (UAVs) operating beyond visual line of sight (BVLoS). The proposed solution employs RL techniques to train an agent, using the quality of communication links between the UAV and base stations (BSs) as the reward function. The RL algorithm efficiently identifies optimal paths, ensuring maximum connectivity with ground BSs for safe and reliable BVLoS flight operation.
arXiv Detail & Related papers (2025-09-11T06:06:39Z) - AeroDuo: Aerial Duo for UAV-based Vision and Language Navigation [34.63571674882289]
Aerial Vision-and-Language Navigation (VLN) is an emerging task that enables Unmanned Aerial Vehicles (UAVs) to navigate outdoor environments using natural language instructions and visual cues. We introduce a novel task called Dual-Altitude UAV Collaborative VLN (DuAl-VLN). In this task, two UAVs operate at distinct altitudes: a high-altitude UAV responsible for broad environmental reasoning, and a low-altitude UAV tasked with precise navigation.
arXiv Detail & Related papers (2025-08-21T04:43:35Z) - Tiny Multi-Agent DRL for Twins Migration in UAV Metaverses: A Multi-Leader Multi-Follower Stackelberg Game Approach [57.15309977293297]
The synergy between Unmanned Aerial Vehicles (UAVs) and metaverses is giving rise to an emerging paradigm named UAV metaverses.
We propose a tiny machine learning-based Stackelberg game framework based on pruning techniques for efficient UT migration in UAV metaverses.
arXiv Detail & Related papers (2024-01-18T02:14:13Z) - Vision-Based UAV Self-Positioning in Low-Altitude Urban Environments [20.69412701553767]
Unmanned Aerial Vehicles (UAVs) rely on satellite systems for stable positioning, but these signals can be unreliable in low-altitude urban environments.
In such situations, vision-based techniques can serve as an alternative, ensuring the self-positioning capability of UAVs.
This paper presents a new dataset, DenseUAV, which is the first publicly available dataset designed for the UAV self-positioning task.
arXiv Detail & Related papers (2022-01-23T07:18:55Z) - Solving reward-collecting problems with UAVs: a comparison of online
optimization and Q-learning [2.4251007104039006]
We study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid.
We present a comparison of three methods to solve this problem: a Deep Q-Learning model, an $\varepsilon$-greedy tabular Q-Learning model, and an online optimization framework.
Our experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
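As a minimal illustration of the $\varepsilon$-greedy tabular Q-Learning baseline mentioned above — a toy grid-world sketch, not the paper's environment; the grid size, reward values, and hyperparameters are illustrative assumptions:

```python
import random
from collections import defaultdict

def train_q_learning(grid_size=4, goal=(3, 3), episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular epsilon-greedy Q-learning on a toy grid world."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    Q = defaultdict(float)                        # (state, action) -> value

    def step(state, action):
        # Clip moves to the grid; reward the goal, penalize each step slightly.
        nxt = (min(max(state[0] + action[0], 0), grid_size - 1),
               min(max(state[1] + action[1], 0), grid_size - 1))
        reward = 1.0 if nxt == goal else -0.01
        return nxt, reward, nxt == goal

    for _ in range(episodes):
        state = (0, 0)
        done = False
        while not done:
            if random.random() < epsilon:          # explore
                action = random.choice(actions)
            else:                                  # exploit
                action = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in actions)
            # Standard Q-learning temporal-difference update.
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = nxt
    return Q
```

After training, acting greedily with respect to `Q` from the start cell traces a short path to the goal; the paper's version would add moving adversaries and reward cells to this basic loop.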
arXiv Detail & Related papers (2021-11-30T22:27:24Z) - A Multi-UAV System for Exploration and Target Finding in Cluttered and
GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as successful rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z) - 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system that relies on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z) - Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z) - Reinforcement Learning-based Joint Path and Energy Optimization of
Cellular-Connected Unmanned Aerial Vehicles [0.0]
We use reinforcement learning (RL) hierarchically to extend typical short-range path planners to consider battery recharge, addressing UAVs on long missions.
The problem is simulated for a UAV flying over a large area, and a Q-learning algorithm enables the UAV to find the optimal path and recharge policy.
arXiv Detail & Related papers (2020-11-27T14:16:55Z) - Simultaneous Navigation and Radio Mapping for Cellular-Connected UAV
with Deep Reinforcement Learning [46.55077580093577]
Achieving ubiquitous 3D communication coverage for UAVs in the sky is a new challenge.
We propose a new coverage-aware navigation approach, which exploits the UAV's controllable mobility to design its navigation/trajectory.
We propose a new framework called simultaneous navigation and radio mapping (SNARM), where the UAV's signal measurement is used to train the deep Q network.
arXiv Detail & Related papers (2020-03-17T08:16:14Z) - Federated Learning in the Sky: Joint Power Allocation and Scheduling
with UAV Swarms [98.78553146823829]
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks.
In this paper, a novel framework is proposed to implement federated learning (FL) algorithms within a UAV swarm.
arXiv Detail & Related papers (2020-02-19T14:04:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.