FH-DRL: Exponential-Hyperbolic Frontier Heuristics with DRL for accelerated Exploration in Unknown Environments
- URL: http://arxiv.org/abs/2407.18892v2
- Date: Thu, 13 Feb 2025 02:46:13 GMT
- Title: FH-DRL: Exponential-Hyperbolic Frontier Heuristics with DRL for accelerated Exploration in Unknown Environments
- Authors: Seunghyeop Nam, Tuan Anh Nguyen, Eunmi Choi, Dugki Min
- Abstract summary: This paper introduces FH-DRL, a novel framework that integrates a customizable function for frontier detection with a Twin Delayed DDPG (TD3) agent for continuous, high-speed local navigation. We thoroughly evaluate FH-DRL across multiple simulated and real-world scenarios, demonstrating clear improvements in travel distance and completion time. The results highlight FH-DRL as an efficient and general approach for frontier-based exploration in large or partially known environments.
- Score: 1.8749305679160366
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous robot exploration in large-scale or cluttered environments remains a central challenge in intelligent vehicle applications, where partial or absent prior maps constrain reliable navigation. This paper introduces FH-DRL, a novel framework that integrates a customizable heuristic function for frontier detection with a Twin Delayed DDPG (TD3) agent for continuous, high-speed local navigation. The proposed heuristic relies on an exponential-hyperbolic distance score, which balances immediate proximity against long-range exploration gains, and an occupancy-based stochastic measure, accounting for environmental openness and obstacle densities in real time. By ranking frontiers using these adaptive metrics, FH-DRL targets highly informative yet tractable waypoints, thereby minimizing redundant paths and total exploration time. We thoroughly evaluate FH-DRL across multiple simulated and real-world scenarios, demonstrating clear improvements in travel distance and completion time over frontier-only or purely DRL-based exploration. In structured corridor layouts and maze-like topologies, our architecture consistently outperforms standard methods such as Nearest Frontier, Cognet Frontier Exploration, and Goal Driven Autonomous Exploration. Real-world tests with a Turtlebot3 platform further confirm robust adaptation to previously unseen or cluttered indoor spaces. The results highlight FH-DRL as an efficient and generalizable approach for frontier-based exploration in large or partially known environments, offering a promising direction for various autonomous driving, industrial, and service robotics tasks.
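The abstract does not give the closed form of the exponential-hyperbolic distance score or the occupancy-based stochastic measure, so the following is only a minimal Python sketch of how such a frontier-ranking heuristic could look. The blend of an exponential near-field term with a hyperbolic far-field term, the openness measure derived from a local free-cell ratio, and all parameter names and weights (lam, mu, w_dist, w_occ) are illustrative assumptions rather than the paper's implementation; the TD3 local navigation policy is omitted entirely.

```python
# Hedged sketch of frontier ranking in the spirit of FH-DRL's heuristic.
# The exact formulas are not given in the abstract; everything below is an
# illustrative assumption, not the authors' implementation.
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frontier:
    position: Tuple[float, float]   # frontier centroid in map coordinates
    local_free_ratio: float         # fraction of free cells around the frontier (0..1)

def exp_hyperbolic_distance_score(d: float, lam: float = 5.0, mu: float = 15.0) -> float:
    """Assumed form: exp(-d/lam) rewards nearby frontiers, while 1/(1 + d/mu)
    decays slowly so distant, potentially high-gain frontiers are not discarded."""
    return 0.5 * math.exp(-d / lam) + 0.5 / (1.0 + d / mu)

def occupancy_measure(free_ratio: float) -> float:
    """Assumed openness score: frontiers surrounded by free space score higher."""
    return max(0.0, min(1.0, free_ratio))

def rank_frontiers(robot_xy: Tuple[float, float],
                   frontiers: List[Frontier],
                   w_dist: float = 0.6,
                   w_occ: float = 0.4) -> List[Tuple[float, Frontier]]:
    """Return frontiers sorted by descending combined heuristic score."""
    scored = []
    for f in frontiers:
        d = math.dist(robot_xy, f.position)
        score = w_dist * exp_hyperbolic_distance_score(d) + w_occ * occupancy_measure(f.local_free_ratio)
        scored.append((score, f))
    return sorted(scored, key=lambda s: s[0], reverse=True)

if __name__ == "__main__":
    frontiers = [Frontier((2.0, 1.0), 0.9), Frontier((12.0, 4.0), 0.4), Frontier((30.0, -5.0), 0.8)]
    best_score, best = rank_frontiers((0.0, 0.0), frontiers)[0]
    print(f"next waypoint: {best.position} (score {best_score:.3f})")
```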
Related papers
- Real-time Spatial-temporal Traversability Assessment via Feature-based Sparse Gaussian Process [14.428139979659395]
Terrain analysis is critical for the practical application of ground mobile robots in real-world tasks.
We propose a novel spatial-temporal traversability assessment method, which aims to enable autonomous robots to navigate through complex terrains.
We develop an autonomous navigation framework integrated with the traversability map and validate it with a differential driven vehicle in complex outdoor environments.
arXiv Detail & Related papers (2025-03-06T06:26:57Z)
- Deep-Sea A*+: An Advanced Path Planning Method Integrating Enhanced A* and Dynamic Window Approach for Autonomous Underwater Vehicles [1.3807821497779342]
Extreme conditions in the deep-sea environment pose significant challenges for underwater operations.
We propose an advanced path planning methodology that integrates an improved A* algorithm with the Dynamic Window Approach (DWA).
Our proposed method surpasses the traditional A* algorithm in terms of path smoothness, obstacle avoidance, and real-time performance.
arXiv Detail & Related papers (2024-10-22T07:29:05Z)
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- From Simulations to Reality: Enhancing Multi-Robot Exploration for Urban Search and Rescue [46.377510400989536]
We present a novel hybrid algorithm for efficient multi-robot exploration in unknown environments with limited communication and no global positioning information.
We redefine the local best and global best positions to suit scenarios without continuous target information.
The presented work holds promise for enhancing multi-robot exploration in scenarios with limited information and communication capabilities.
arXiv Detail & Related papers (2023-11-28T17:05:25Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to its reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment [0.0]
This research combines an existing Visual-Graph SLAM known as ExploreORB with reinforcement learning.
The proposed algorithm aims to improve the efficiency and accuracy of ExploreORB by optimizing the exploration process of frontiers to build a more accurate map.
arXiv Detail & Related papers (2023-07-14T12:19:46Z)
- Confidence-Controlled Exploration: Efficient Sparse-Reward Policy Learning for Robot Navigation [72.24964965882783]
Reinforcement learning (RL) is a promising approach for robotic navigation, allowing robots to learn through trial and error.
Real-world robotic tasks often suffer from sparse rewards, leading to inefficient exploration and suboptimal policies.
We introduce Confidence-Controlled Exploration (CCE), a novel method that improves sample efficiency in RL-based robotic navigation without modifying the reward function.
arXiv Detail & Related papers (2023-06-09T18:45:15Z)
- Learning-Augmented Model-Based Planning for Visual Exploration [8.870188183999854]
We propose a novel exploration approach using learning-augmented model-based planning.
Visual sensing and advances in semantic mapping of indoor scenes are exploited.
Our approach surpasses the greedy strategies by 2.1% and the RL-based exploration methods by 8.4% in terms of coverage.
arXiv Detail & Related papers (2022-11-15T04:53:35Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration [16.032316550612336]
We present AirDet, which is free of fine-tuning by learning class relation with support images.
AirDet achieves comparable or even better results than the exhaustively finetuned methods, reaching up to 40-60% improvements on the baseline.
We present evaluation results on real-world exploration tests from the DARPA Subterranean Challenge.
arXiv Detail & Related papers (2021-12-03T06:41:07Z)
- Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
arXiv Detail & Related papers (2021-11-18T18:36:05Z)
- Focus on Impact: Indoor Exploration with Intrinsic Motivation [45.97756658635314]
In this work, we propose to train a model with a purely intrinsic reward signal to guide exploration.
We include a neural-based density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states.
We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
arXiv Detail & Related papers (2021-09-14T18:00:07Z)
- Rule-Based Reinforcement Learning for Efficient Robot Navigation with Space Reduction [8.279526727422288]
In this paper, we focus on efficient navigation with the reinforcement learning (RL) technique.
We employ a reduction rule to shrink the trajectory, which in turn effectively reduces the redundant exploration space.
Experiments conducted on real robot navigation problems in hex-grid environments demonstrate that RuRL can achieve improved navigation performance.
arXiv Detail & Related papers (2021-04-15T07:40:27Z)
- Sparse Reward Exploration via Novelty Search and Emitters [55.41644538483948]
We introduce the SparsE Reward Exploration via Novelty and Emitters (SERENE) algorithm.
SERENE separates the search space exploration and reward exploitation into two alternating processes.
A meta-scheduler allocates a global computational budget by alternating between the two processes.
arXiv Detail & Related papers (2021-02-05T12:34:54Z)
- Autonomous UAV Exploration of Dynamic Environments via Incremental Sampling and Probabilistic Roadmap [0.3867363075280543]
We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and a Probabilistic Roadmap (PRM).
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
arXiv Detail & Related papers (2020-10-14T22:52:37Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.