SHANGUS: Deep Reinforcement Learning Meets Heuristic Optimization for Speedy Frontier-Based Exploration of Autonomous Vehicles in Unknown Spaces
- URL: http://arxiv.org/abs/2407.18892v1
- Date: Fri, 26 Jul 2024 17:42:18 GMT
- Title: SHANGUS: Deep Reinforcement Learning Meets Heuristic Optimization for Speedy Frontier-Based Exploration of Autonomous Vehicles in Unknown Spaces
- Authors: Seunghyeop Nam, Tuan Anh Nguyen, Eunmi Choi, Dugki Min
- Abstract summary: SHANGUS is a framework combining Deep Reinforcement Learning (DRL) with heuristic optimization to improve frontier-based exploration efficiency.
The framework is suitable for real-time autonomous navigation in fields such as industrial automation, autonomous driving, household robotics, and space exploration.
- Score: 1.8749305679160366
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces SHANGUS, an advanced framework combining Deep Reinforcement Learning (DRL) with heuristic optimization to improve frontier-based exploration efficiency in unknown environments, particularly for intelligent vehicles in autonomous air services, search and rescue operations, and space exploration robotics. SHANGUS harnesses DRL's adaptability and heuristic prioritization, markedly enhancing exploration efficiency, reducing completion time, and minimizing travel distance. The strategy involves a frontier selection node to identify unexplored areas and a DRL navigation node using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm for robust path planning and dynamic obstacle avoidance. Extensive experiments in ROS2 and Gazebo simulation environments show SHANGUS surpasses representative traditional methods like the Nearest Frontier (NF), Novel Frontier-Based Exploration Algorithm (CFE), and Goal-Driven Autonomous Exploration (GDAE) algorithms, especially in complex scenarios, excelling in completion time, travel distance, and exploration rate. This scalable solution is suitable for real-time autonomous navigation in fields such as industrial automation, autonomous driving, household robotics, and space exploration. Future research will integrate additional sensory inputs and refine heuristic functions to further boost SHANGUS's efficiency and robustness.
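The abstract describes a frontier selection node that prioritizes unexplored areas heuristically before handing a goal to the TD3 navigation node. A minimal sketch of such a prioritization step is below; the scoring weights, the `unknown_cells` gain estimate, and the dictionary layout are illustrative assumptions, not the paper's actual cost function.

```python
import math

def score_frontier(frontier, robot_pose, w_dist=1.0, w_gain=2.0):
    """Hypothetical heuristic: trade travel cost against expected
    information gain. Weights and the gain proxy are assumptions."""
    dx = frontier["x"] - robot_pose[0]
    dy = frontier["y"] - robot_pose[1]
    distance = math.hypot(dx, dy)
    # Approximate information gain by the number of unknown cells
    # bordering this frontier cluster.
    gain = frontier["unknown_cells"]
    return w_gain * gain - w_dist * distance

def select_frontier(frontiers, robot_pose):
    """Pick the highest-scoring frontier as the next exploration goal."""
    return max(frontiers, key=lambda f: score_frontier(f, robot_pose))
```

The selected frontier would then be passed as the goal for the DRL navigation node's path planning.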
Related papers
- Real-time Spatial-temporal Traversability Assessment via Feature-based Sparse Gaussian Process [14.428139979659395]
Terrain analysis is critical for the practical application of ground mobile robots in real-world tasks.
We propose a novel spatial-temporal traversability assessment method, which aims to enable autonomous robots to navigate through complex terrains.
We develop an autonomous navigation framework integrated with the traversability map and validate it with a differential driven vehicle in complex outdoor environments.
arXiv Detail & Related papers (2025-03-06T06:26:57Z)
- Deep-Sea A*+: An Advanced Path Planning Method Integrating Enhanced A* and Dynamic Window Approach for Autonomous Underwater Vehicles [1.3807821497779342]
Extreme conditions in the deep-sea environment pose significant challenges for underwater operations.
We propose an advanced path planning methodology that integrates an improved A* algorithm with the Dynamic Window Approach (DWA).
Our proposed method surpasses the traditional A* algorithm in terms of path smoothness, obstacle avoidance, and real-time performance.
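The DWA component mentioned above samples velocity pairs and scores their short-horizon rollouts. A minimal sketch of that idea follows; the candidate grid, motion model, and cost weights are simplified assumptions and not this paper's tuned formulation.

```python
import math

def dwa_choose_velocity(pose, goal, obstacles, v_max=1.0, w_max=1.0, dt=0.5):
    """Illustrative Dynamic Window Approach step: sample (v, w) pairs,
    roll a unicycle model forward one step, and score goal progress
    against obstacle clearance."""
    best, best_score = (0.0, 0.0), -math.inf
    x, y, theta = pose
    for v in (0.2, 0.5, v_max):
        for w in (-w_max, 0.0, w_max):
            # One-step forward simulation of the candidate command.
            nx = x + v * math.cos(theta + w * dt) * dt
            ny = y + v * math.sin(theta + w * dt) * dt
            heading = -math.hypot(goal[0] - nx, goal[1] - ny)
            clearance = min((math.hypot(ox - nx, oy - ny)
                             for ox, oy in obstacles), default=5.0)
            score = heading + 0.5 * min(clearance, 2.0)
            if score > best_score:
                best, best_score = (v, w), score
    return best
```

A real implementation would also enforce the dynamic window itself, i.e. restrict samples to velocities reachable within one control cycle given acceleration limits.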
arXiv Detail & Related papers (2024-10-22T07:29:05Z)
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- From Simulations to Reality: Enhancing Multi-Robot Exploration for Urban Search and Rescue [46.377510400989536]
We present a novel hybrid algorithm for efficient multi-robot exploration in unknown environments with limited communication and no global positioning information.
We redefine the local best and global best positions to suit scenarios without continuous target information.
The presented work holds promise for enhancing multi-robot exploration in scenarios with limited information and communication capabilities.
arXiv Detail & Related papers (2023-11-28T17:05:25Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment [0.0]
This research combines an existing Visual-Graph SLAM known as ExploreORB with reinforcement learning.
The proposed algorithm aims to improve the efficiency and accuracy of ExploreORB by optimizing the exploration process of frontiers to build a more accurate map.
arXiv Detail & Related papers (2023-07-14T12:19:46Z)
- Confidence-Controlled Exploration: Efficient Sparse-Reward Policy Learning for Robot Navigation [72.24964965882783]
Reinforcement learning (RL) is a promising approach for robotic navigation, allowing robots to learn through trial and error.
Real-world robotic tasks often suffer from sparse rewards, leading to inefficient exploration and suboptimal policies.
We introduce Confidence-Controlled Exploration (CCE), a novel method that improves sample efficiency in RL-based robotic navigation without modifying the reward function.
arXiv Detail & Related papers (2023-06-09T18:45:15Z)
- Learning-Augmented Model-Based Planning for Visual Exploration [8.870188183999854]
We propose a novel exploration approach using learning-augmented model-based planning.
Visual sensing and advances in semantic mapping of indoor scenes are exploited.
Our approach surpasses the greedy strategies by 2.1% and the RL-based exploration methods by 8.4% in terms of coverage.
arXiv Detail & Related papers (2022-11-15T04:53:35Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration [16.032316550612336]
We present AirDet, which is free of fine-tuning by learning class relations with support images.
AirDet achieves comparable or even better results than the exhaustively finetuned methods, reaching up to 40-60% improvements on the baseline.
We present evaluation results on real-world exploration tests from the DARPA Subterranean Challenge.
arXiv Detail & Related papers (2021-12-03T06:41:07Z)
- Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
arXiv Detail & Related papers (2021-11-18T18:36:05Z)
- Focus on Impact: Indoor Exploration with Intrinsic Motivation [45.97756658635314]
In this work, we propose to train a model with a purely intrinsic reward signal to guide exploration.
We include a neural-based density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states.
We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
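The blurb above describes replacing count-based regularization with a pseudo-count estimated from a density model. A sketch of the standard pseudo-count construction this family of methods uses is below; the bonus scale and the small stabilizing constant are illustrative choices, and the paper's exact formulation may differ.

```python
import math

def pseudo_count(rho_before, rho_after):
    """Pseudo-count derived from a density model: rho_before is the
    model's probability of a state before observing it, rho_after the
    recoding probability after one update on that state."""
    return rho_before * (1.0 - rho_after) / (rho_after - rho_before)

def intrinsic_reward(n_hat, scale=1.0):
    """Count-based exploration bonus proportional to 1/sqrt(N); the
    0.01 offset (an assumption) keeps the bonus finite for N near 0."""
    return scale / math.sqrt(n_hat + 0.01)
```

States the density model already explains well yield large pseudo-counts and thus small bonuses, steering exploration toward novelty.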
arXiv Detail & Related papers (2021-09-14T18:00:07Z)
- Rule-Based Reinforcement Learning for Efficient Robot Navigation with Space Reduction [8.279526727422288]
In this paper, we focus on efficient navigation with the reinforcement learning (RL) technique.
We employ a reduction rule to shrink the trajectory, which in turn effectively reduces the redundant exploration space.
Experiments conducted on real robot navigation problems in hex-grid environments demonstrate that RuRL can achieve improved navigation performance.
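One simple reduction rule in the spirit described above is to cut loops out of a recorded trajectory: if the agent revisits a state, everything between the two visits was redundant exploration. The sketch below is an assumed illustration of that idea, not RuRL's actual rule.

```python
def shrink_trajectory(states):
    """Remove loops: when a state reappears, drop the segment between
    its first visit and the revisit. States must be hashable."""
    out, seen = [], {}
    for s in states:
        if s in seen:
            del out[seen[s] + 1:]                      # cut the loop
            seen = {t: i for i, t in enumerate(out)}   # rebuild index
        else:
            out.append(s)
            seen[s] = len(out) - 1
    return out
```

Shrinking trajectories this way reduces the effective state space the learner must revisit, which matches the efficiency motivation given in the blurb.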
arXiv Detail & Related papers (2021-04-15T07:40:27Z)
- Sparse Reward Exploration via Novelty Search and Emitters [55.41644538483948]
We introduce the SparsE Reward Exploration via Novelty and Emitters (SERENE) algorithm.
SERENE separates the search space exploration and reward exploitation into two alternating processes.
A meta-scheduler allocates a global computational budget by alternating between the two processes.
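The alternating-process idea above can be sketched as a scheduler that hands fixed chunks of a global evaluation budget alternately to the exploration and exploitation phases; the chunk size and round-robin policy here are illustrative assumptions rather than SERENE's actual meta-scheduler.

```python
def meta_schedule(total_budget, chunk=100):
    """Split a global budget into chunks handed alternately to
    novelty search (explore) and emitters (exploit)."""
    phases = []
    remaining = total_budget
    explore = True
    while remaining > 0:
        step = min(chunk, remaining)
        phases.append(("explore" if explore else "exploit", step))
        remaining -= step
        explore = not explore
    return phases
```

A more faithful scheduler would allocate chunks adaptively, e.g. based on recent reward discoveries, rather than strictly alternating.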
arXiv Detail & Related papers (2021-02-05T12:34:54Z)
- Autonomous UAV Exploration of Dynamic Environments via Incremental Sampling and Probabilistic Roadmap [0.3867363075280543]
We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and Probabilistic Roadmap (PRM).
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
arXiv Detail & Related papers (2020-10-14T22:52:37Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.