Autonomous UAV Exploration of Dynamic Environments via Incremental
Sampling and Probabilistic Roadmap
- URL: http://arxiv.org/abs/2010.07429v3
- Date: Sun, 21 Mar 2021 03:04:36 GMT
- Authors: Zhefan Xu, Di Deng, Kenji Shimada
- Abstract summary: We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and a Probabilistic Roadmap (PRM).
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous exploration requires robots to generate informative trajectories
iteratively. Although sampling-based methods are highly efficient in unmanned
aerial vehicle exploration, many of these methods do not effectively utilize
the sampled information from the previous planning iterations, leading to
redundant computation and longer exploration time. Moreover, few have explicitly
demonstrated their exploration ability in dynamic environments, even though they
can run in real time. To overcome these limitations, we propose a novel dynamic
exploration planner (DEP) for exploring unknown environments using incremental
sampling and Probabilistic Roadmap (PRM). In our sampling strategy, nodes are
added incrementally and distributed evenly in the explored region, yielding the
best viewpoints. To further shorten exploration time and ensure safety,
our planner optimizes paths locally and refines them based on the Euclidean
Signed Distance Function (ESDF) map. Meanwhile, as a multi-query planner, PRM
allows the proposed planner to quickly search alternative paths to avoid
dynamic obstacles for safe exploration. Simulation experiments show that our
method safely explores dynamic environments and outperforms the benchmark
planners in terms of exploration time, path length, and computational time.
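The incremental sampling strategy described above (adding evenly spaced nodes to a PRM while keeping them clear of obstacles via an ESDF-style distance check) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the spacing/clearance thresholds, and the brute-force ESDF lookup are all simplifying assumptions.

```python
import math
import random

def esdf_distance(p, obstacles):
    """Stand-in for an ESDF map lookup: distance from point p to the
    nearest obstacle point (a real planner would query a precomputed
    Euclidean Signed Distance Function grid)."""
    return min(math.dist(p, o) for o in obstacles)

def add_node_incrementally(roadmap, obstacles, bounds,
                           min_spacing=1.0, safe_dist=0.5, tries=100):
    """Sample one new PRM node, keeping nodes evenly spaced and safely
    clear of obstacles; connect it to nearby nodes and return it, or
    return None if no valid sample is found."""
    lo, hi = bounds
    for _ in range(tries):
        p = tuple(random.uniform(lo, hi) for _ in range(3))
        if esdf_distance(p, obstacles) < safe_dist:
            continue  # too close to an obstacle: unsafe
        if any(math.dist(p, q) < min_spacing for q in roadmap):
            continue  # rejecting near-duplicates keeps sampling even
        # connect the new node to existing nodes within a local radius
        neighbors = [q for q in roadmap if math.dist(p, q) < 2 * min_spacing]
        roadmap[p] = neighbors
        for q in neighbors:
            roadmap[q].append(p)
        return p
    return None
```

Because the roadmap persists across planning iterations, each call only adds to the existing graph rather than resampling from scratch, which is the source of the claimed savings in computation time.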
Related papers
- OTO Planner: An Efficient Only Travelling Once Exploration Planner for Complex and Unknown Environments [6.128246045267511]
"Only Travelling Once Planner" is an efficient exploration planner that reduces repeated paths in complex environments.
It includes fast frontier updating, viewpoint evaluation and viewpoint refinement.
It reduces the exploration time and movement distance by 10%-20% and improves the speed of frontier detection by 6-9 times.
arXiv Detail & Related papers (2024-06-11T14:23:48Z) - AI planning in the imagination: High-level planning on learned abstract
search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z) - Multi-Robot Path Planning Combining Heuristics and Multi-Agent
Reinforcement Learning [0.0]
In the movement process, robots need to avoid collisions with other moving robots while minimizing their travel distance.
Previous methods for this problem either continuously replan paths using search methods to avoid conflicts or choose appropriate collision avoidance strategies based on learning approaches.
We propose a path planning method, MAPPOHR, which combines a search, empirical rules, and multi-agent reinforcement learning.
arXiv Detail & Related papers (2023-06-02T05:07:37Z) - Exploration via Planning for Information about the Optimal Trajectory [67.33886176127578]
We develop a method that allows us to plan for exploration while taking the task and the current knowledge into account.
We demonstrate that our method learns strong policies with half as many samples as strong exploration baselines.
arXiv Detail & Related papers (2022-10-06T20:28:55Z) - Incremental 3D Scene Completion for Safe and Efficient Exploration
Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z) - Adaptive Informative Path Planning Using Deep Reinforcement Learning for
UAV-based Active Sensing [2.6519061087638014]
We propose a new approach for informative path planning based on deep reinforcement learning (RL).
Our method combines Monte Carlo tree search with an offline-learned neural network predicting informative sensing actions.
By deploying the trained network during a mission, our method enables sample-efficient online replanning on physical platforms with limited computational resources.
arXiv Detail & Related papers (2021-09-28T09:00:55Z) - A Multi-UAV System for Exploration and Target Finding in Cluttered and
GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as successful rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z) - MADE: Exploration via Maximizing Deviation from Explored Regions [48.49228309729319]
In online reinforcement learning (RL), efficient exploration remains challenging in high-dimensional environments with sparse rewards.
We propose a new exploration approach via *maximizing the deviation* of the next policy's occupancy from the explored regions.
Our approach significantly improves sample efficiency over state-of-the-art methods.
arXiv Detail & Related papers (2021-06-18T17:57:00Z) - Deep Reinforcement Learning for Adaptive Exploration of Unknown
Environments [6.90777229452271]
We develop an adaptive exploration approach to trade off between exploration and exploitation in one single step for UAVs.
The proposed approach uses a map segmentation technique to decompose the environment map into smaller, tractable maps.
The results demonstrate that our proposed approach is capable of navigating through randomly generated environments and covering more AoI in fewer time steps compared to the baselines.
arXiv Detail & Related papers (2021-05-04T16:29:44Z) - Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial
Vehicles (MAVs) [61.94975011711275]
We propose a geometry-based motion planning technique, "RRT*", for this purpose.
The proposed technique modifies the original RRT* by introducing an adaptive search space and a steering function.
We have tested the proposed technique in various simulated environments.
arXiv Detail & Related papers (2020-08-29T09:55:49Z)
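The RRT* modification in the last entry hinges on a steering function, which pulls each sampled point toward the tree without exceeding a step limit. A minimal sketch of such a steering step (the function name and step size are illustrative assumptions, not the paper's implementation):

```python
import math

def steer(from_pt, to_pt, max_step=0.5):
    """Move from from_pt toward to_pt, but no farther than max_step.
    A standard RRT*-style steering step (simplified sketch)."""
    d = math.dist(from_pt, to_pt)
    if d <= max_step:
        return to_pt  # target is within reach: go there directly
    t = max_step / d  # fraction of the segment covered by one step
    return tuple(a + t * (b - a) for a, b in zip(from_pt, to_pt))
```

An adaptive search space, as the entry describes, would additionally shrink or shift the sampling region as the tree grows, but the steering step itself is the part that bounds each extension.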