Collaborative Reinforcement Learning Based Unmanned Aerial Vehicle (UAV)
Trajectory Design for 3D UAV Tracking
- URL: http://arxiv.org/abs/2401.12079v1
- Date: Mon, 22 Jan 2024 16:21:19 GMT
- Title: Collaborative Reinforcement Learning Based Unmanned Aerial Vehicle (UAV)
Trajectory Design for 3D UAV Tracking
- Authors: Yujiao Zhu, Mingzhe Chen, Sihua Wang, Ye Hu, Yuchen Liu, and
Changchuan Yin
- Abstract summary: The problem of using one active unmanned aerial vehicle (UAV) and four passive UAVs to localize a 3D target UAV in real time is investigated.
A Z function decomposition based reinforcement learning (ZD-RL) method is proposed to solve this problem.
- Score: 21.520344500526516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the problem of using one active unmanned aerial vehicle (UAV)
and four passive UAVs to localize a 3D target UAV in real time is investigated.
In the considered model, each passive UAV receives reflection signals from the
target UAV, which are initially transmitted by the active UAV. The received
reflection signals allow each passive UAV to estimate the signal transmission
distance, which is then sent to a base station (BS) to estimate the position of
the target UAV. Due to the movement of the target UAV, each
active/passive UAV must optimize its trajectory to continuously localize the
target UAV. Meanwhile, since the accuracy of the distance estimation depends on
the signal-to-noise ratio of the transmission signals, the active UAV must
optimize its transmit power. This problem is formulated as an optimization
problem whose goal is to jointly optimize the transmit power of the active UAV
and trajectories of both active and passive UAVs so as to maximize the target
UAV positioning accuracy. To solve this problem, a Z function decomposition
based reinforcement learning (ZD-RL) method is proposed. Compared to value
function decomposition based RL (VD-RL), the proposed method can find the
probability distribution of the sum of future rewards and thereby accurately
estimate its expected value, yielding better transmit power for the active UAV,
better trajectories for both active and passive UAVs, and improved target UAV
positioning accuracy. Simulation results show that the
proposed ZD-RL method can reduce the positioning errors by up to 39.4% and
64.6%, compared to VD-RL and independent deep RL methods, respectively.
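The localization step described in the abstract — each passive UAV reporting a bistatic range (active UAV to target to passive UAV) to the BS — can be illustrated as a nonlinear least-squares problem. The sketch below is only a minimal illustration, not the paper's method: the UAV positions, the plain Gauss-Newton solver, and all numerical values are assumptions made for this example.

```python
import numpy as np

def bistatic_range(active, passive, target):
    # Signal path length: active UAV -> target -> passive UAV.
    return np.linalg.norm(target - active) + np.linalg.norm(passive - target)

def localize(active, passives, ranges, x0, iters=50):
    # Gauss-Newton least squares on the bistatic-range residuals.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Residual: predicted range at the current estimate minus measured range.
        r = np.array([bistatic_range(active, p, x) - d
                      for p, d in zip(passives, ranges)])
        # Jacobian row: gradient of |x-a| + |x-p| w.r.t. the target position x.
        J = np.array([(x - active) / np.linalg.norm(x - active)
                      + (x - p) / np.linalg.norm(x - p)
                      for p in passives])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Made-up geometry: one active UAV, four passive UAVs, one target (meters).
active = np.array([0.0, 0.0, 50.0])
passives = [np.array(p, dtype=float)
            for p in ([100, 0, 40], [0, 100, 60], [-100, 0, 55], [0, -100, 45])]
target = np.array([20.0, -30.0, 70.0])

# Noiseless bistatic-range measurements, then recover the target position.
ranges = np.array([bistatic_range(active, p, target) for p in passives])
estimate = localize(active, passives, ranges, x0=np.array([0.0, 0.0, 60.0]))
```

With four well-spread passive UAVs, the four range equations over-determine the three unknown target coordinates, which is why the least-squares step is well posed; in the paper, the quality of these range estimates additionally depends on the transmit power and UAV trajectories being optimized.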
Related papers
- UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning [79.16150966434299]
We formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP) to maximize the transmission rate of the UVAA and minimize the energy consumption of all UAVs.
We use the heterogeneous-agent trust region policy optimization (HATRPO) as the basic framework, and then propose an improved HATRPO algorithm, namely HATRPO-UCB.
arXiv Detail & Related papers (2024-04-11T03:19:22Z) - Anti-Jamming Path Planning Using GCN for Multi-UAV [0.0]
The effectiveness of UAV swarms can be severely compromised by jamming technology.
A novel approach, where UAV swarms leverage collective intelligence to predict jamming areas, is proposed.
A multi-agent control algorithm is then employed to disperse the UAV swarm, avoid jamming, and regroup upon reaching the target.
arXiv Detail & Related papers (2024-03-13T07:28:05Z) - UAV Swarm-enabled Collaborative Secure Relay Communications with
Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize UAVs to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z) - Integrated Sensing, Computation, and Communication for UAV-assisted
Federated Edge Learning [52.7230652428711]
Federated edge learning (FEEL) enables privacy-preserving model training through periodic communication between edge devices and the server.
Unmanned Aerial Vehicle (UAV)-mounted edge devices are particularly advantageous for FEEL due to their flexibility and mobility in efficient data collection.
arXiv Detail & Related papers (2023-06-05T16:01:33Z) - UAV Obstacle Avoidance by Human-in-the-Loop Reinforcement in Arbitrary
3D Environment [17.531224704021273]
This paper focuses on the continuous control of the unmanned aerial vehicle (UAV) based on a deep reinforcement learning method.
We propose a deep reinforcement learning (DRL)-based method combined with human-in-the-loop, which allows the UAV to avoid obstacles automatically during flying.
arXiv Detail & Related papers (2023-04-07T01:44:05Z) - Responsive Regulation of Dynamic UAV Communication Networks Based on
Deep Reinforcement Learning [16.78151396672782]
We develop an optimal UAV control policy which is capable of identifying the upcoming change in the UAV lineup and relocating the UAVs ahead of the change.
Specifically, a deep reinforcement learning (DRL)-based UAV control framework is developed to maximize the accumulated user satisfaction (US) score for a given time horizon.
In addition, to handle the continuous state and action space, the deep deterministic policy gradient (DDPG) algorithm, an actor-critic based DRL method, is exploited.
arXiv Detail & Related papers (2021-08-25T02:04:13Z) - 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are beginning to be deployed to enhance network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for UAV-assisted Internet of Things (IoT) systems.
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z) - Anti-UAV: A Large Multi-Modal Benchmark for UAV Tracking [59.06167734555191]
Unmanned Aerial Vehicles (UAVs) offer many applications in both commerce and recreation.
We consider the task of tracking UAVs, providing rich information such as location and trajectory.
We propose a dataset, Anti-UAV, with more than 300 video pairs containing over 580k manually annotated bounding boxes.
arXiv Detail & Related papers (2021-01-21T07:00:15Z) - Secure communication between UAVs using a method based on smart agents
in unmanned aerial vehicles [1.2691047660244335]
Unmanned aerial vehicles (UAVs) can be deployed to monitor very large areas without the need for network infrastructure.
Such communication poses security challenges due to its dynamic topology.
The proposed method uses two phases to counter malicious UAV attacks.
arXiv Detail & Related papers (2020-11-03T10:33:39Z) - SREC: Proactive Self-Remedy of Energy-Constrained UAV-Based Networks via
Deep Reinforcement Learning [11.065500588538997]
Energy-aware control for multiple unmanned aerial vehicles (UAVs) is one of the major research interests in UAV based networking.
We study proactive self-remedy of energy-constrained UAV networks when one or more UAVs are short of energy and about to quit for charging.
We propose an energy-aware optimal UAV control policy which proactively relocates the UAVs when any UAV is about to quit the network.
arXiv Detail & Related papers (2020-09-17T20:51:17Z) - Federated Learning in the Sky: Joint Power Allocation and Scheduling
with UAV Swarms [98.78553146823829]
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks.
In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm.
arXiv Detail & Related papers (2020-02-19T14:04:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.