Motion Planning by Reinforcement Learning for an Unmanned Aerial Vehicle
in Virtual Open Space with Static Obstacles
- URL: http://arxiv.org/abs/2009.11799v1
- Date: Thu, 24 Sep 2020 16:42:56 GMT
- Authors: Sanghyun Kim, Jongmin Park, Jae-Kwan Yun, and Jiwon Seo
- Abstract summary: We applied reinforcement learning to perform motion planning for an unmanned aerial vehicle (UAV) in an open space with static obstacles.
As the reinforcement learning progressed, the mean reward and goal rate of the model increased.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we applied reinforcement learning based on the proximal policy
optimization algorithm to perform motion planning for an unmanned aerial
vehicle (UAV) in an open space with static obstacles. The application of
reinforcement learning through a real UAV has several limitations such as time
and cost; thus, we used the Gazebo simulator to train a virtual quadrotor UAV
in a virtual environment. As the reinforcement learning progressed, the mean
reward and goal rate of the model increased. Furthermore, testing the trained
model showed that the UAV reached the goal with an 81% goal rate using the
simple reward function suggested in this work.
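The abstract describes the reward function only as "simple" and does not reproduce it here. As a purely illustrative sketch (the constants, goal radius, and shaping scheme are assumptions, not the paper's actual formulation), a distance-based reward for goal-reaching with a collision penalty might look like:

```python
import math

def simple_reward(position, goal, prev_distance, collided, goal_radius=0.5):
    """Hypothetical distance-based reward for UAV motion planning.

    NOTE: illustrative assumption only; not the reward function from the
    paper. Returns (reward, current_distance) so the caller can feed the
    distance back in on the next step.
    """
    distance = math.dist(position, goal)
    if collided:
        return -100.0, distance   # assumed large penalty for hitting an obstacle
    if distance <= goal_radius:
        return 100.0, distance    # assumed bonus for reaching the goal region
    # Dense shaping term: positive when the UAV moved closer to the goal,
    # negative when it moved away.
    return prev_distance - distance, distance
```

A reward of this shape gives the policy a learning signal at every step rather than only at the terminal states, which is one common way such "simple" reward functions are constructed for goal-reaching tasks.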
Related papers
- Autonomous Decision Making for UAV Cooperative Pursuit-Evasion Game with Reinforcement Learning [50.33447711072726]
This paper proposes a deep reinforcement learning-based model for decision-making in multi-role UAV cooperative pursuit-evasion game.
The proposed method enables autonomous decision-making of the UAVs in pursuit-evasion game scenarios.
arXiv Detail & Related papers (2024-11-05T10:45:30Z)
- SOAR: Self-supervision Optimized UAV Action Recognition with Efficient Object-Aware Pretraining [65.9024395309316]
We introduce a novel self-supervised pretraining algorithm for aerial footage captured by Unmanned Aerial Vehicles (UAVs).
We incorporate human object knowledge throughout the pretraining process to enhance UAV video pretraining efficiency and downstream action recognition performance.
arXiv Detail & Related papers (2024-09-26T21:15:22Z)
- UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning [79.16150966434299]
We formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP) to maximize the transmission rate of the UVAA and minimize the energy consumption of all UAVs.
We use the heterogeneous-agent trust region policy optimization (HATRPO) as the basic framework, and then propose an improved HATRPO algorithm, namely HATRPO-UCB.
arXiv Detail & Related papers (2024-04-11T03:19:22Z)
- Tiny Multi-Agent DRL for Twins Migration in UAV Metaverses: A Multi-Leader Multi-Follower Stackelberg Game Approach [57.15309977293297]
The synergy between Unmanned Aerial Vehicles (UAVs) and metaverses is giving rise to an emerging paradigm named UAV metaverses.
We propose a tiny machine learning-based Stackelberg game framework based on pruning techniques for efficient UT migration in UAV metaverses.
arXiv Detail & Related papers (2024-01-18T02:14:13Z)
- Joint Path Planning and Power Allocation of a Cellular-Connected UAV using Apprenticeship Learning via Deep Inverse Reinforcement Learning [7.760962597460447]
This paper investigates an interference-aware joint path planning and power allocation mechanism for a cellular-connected unmanned aerial vehicle (UAV) in a sparse suburban environment.
The UAV aims to maximize its uplink throughput and minimize the level of interference to the ground user equipment (UEs) connected to the neighbor cellular BSs.
An apprenticeship learning method is utilized via inverse reinforcement learning (IRL) based on both Q-learning and deep reinforcement learning (DRL).
arXiv Detail & Related papers (2023-06-15T20:50:05Z)
- UAV Obstacle Avoidance by Human-in-the-Loop Reinforcement in Arbitrary 3D Environment [17.531224704021273]
This paper focuses on the continuous control of the unmanned aerial vehicle (UAV) based on a deep reinforcement learning method.
We propose a deep reinforcement learning (DRL)-based method combined with human-in-the-loop control, which allows the UAV to avoid obstacles automatically during flight.
arXiv Detail & Related papers (2023-04-07T01:44:05Z)
- Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning [0.0]
The research proposes a power plant inspection system incorporating UAV autonomous navigation and DQN reinforcement learning.
By enabling the UAV to navigate difficult environments on its own, the trained model makes the inspection strategy more likely to be applied in practice.
arXiv Detail & Related papers (2023-03-16T00:58:50Z)
- Reinforcement learning reward function in unmanned aerial vehicle control tasks [0.0]
The reward function is based on the construction and estimation of the time of simplified trajectories to the target.
The effectiveness of the reward function was tested in a newly developed virtual environment.
arXiv Detail & Related papers (2022-03-20T10:32:44Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT)
In this paper, we design a new UAV-assisted IoT system that relies on the shortest flight paths of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Autonomous UAV Navigation: A DDPG-based Deep Reinforcement Learning Approach [1.552282932199974]
We propose an autonomous UAV path planning framework using deep reinforcement learning approach.
The objective is to employ a self-trained UAV as a flying mobile unit to reach spatially distributed moving or static targets.
arXiv Detail & Related papers (2020-03-21T19:33:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.