Deep Reinforcement Learning for Localizability-Enhanced Navigation in
Dynamic Human Environments
- URL: http://arxiv.org/abs/2303.12354v1
- Date: Wed, 22 Mar 2023 07:44:35 GMT
- Title: Deep Reinforcement Learning for Localizability-Enhanced Navigation in
Dynamic Human Environments
- Authors: Yuan Chen, Quecheng Qiu, Xiangyu Liu, Guangda Chen, Shunyi Yao, Jie
Peng, Jianmin Ji and Yanyong Zhang
- Abstract summary: Reliable localization is crucial for autonomous robots to navigate efficiently and safely.
We propose a novel approach for localizability-enhanced navigation via deep reinforcement learning.
Our method exhibits significant improvements in lost rate and arrival rate when tested in previously unseen environments.
- Score: 16.25625435648576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable localization is crucial for autonomous robots to navigate
efficiently and safely. Some navigation methods can plan paths with high
localizability (which describes the capability of acquiring reliable
localization). By following these paths, the robot can access the sensor
streams that facilitate more accurate location estimation results by the
localization algorithms. However, most of these methods require prior knowledge
and struggle to adapt to unseen scenarios or dynamic changes. To overcome these
limitations, we propose a novel approach for localizability-enhanced navigation
via deep reinforcement learning in dynamic human environments. Our proposed
planner automatically extracts geometric features from 2D laser data that are
helpful for localization. The planner learns to assign different importance to
the geometric features and encourages the robot to navigate through areas that
are helpful for laser localization. To facilitate the learning of the planner,
we suggest two techniques: (1) an augmented state representation that considers
the dynamic changes and the confidence of the localization results, which
provides more information and allows the robot to make better decisions, (2) a
reward metric that offers both sparse and dense feedback on
behaviors that affect localization accuracy. Our method exhibits significant
improvements in lost rate and arrival rate when tested in previously unseen
environments.
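The abstract's second technique, a reward combining sparse and dense feedback on localization accuracy, can be sketched as follows. This is a minimal illustrative reading, not the paper's implementation: the function names, the use of the pose-covariance trace as a dense confidence signal, and all weights are assumptions.

```python
def localizability_reward(cov_trace: float,
                          prev_cov_trace: float,
                          lost: bool,
                          arrived: bool,
                          w_dense: float = 0.5,
                          r_lost: float = -10.0,
                          r_arrive: float = 10.0) -> float:
    """Combine dense per-step localization feedback with sparse event feedback.

    Hypothetical sketch: `cov_trace` stands in for the localizer's
    uncertainty (e.g. the trace of a particle filter's pose covariance);
    smaller values mean more confident localization.
    """
    # Dense term: reward any reduction in pose-estimate uncertainty,
    # giving the agent per-step guidance toward localizable areas.
    dense = w_dense * (prev_cov_trace - cov_trace)
    # Sparse term: large terminal penalty for losing localization,
    # large terminal bonus for arriving at the goal.
    sparse = (r_lost if lost else 0.0) + (r_arrive if arrived else 0.0)
    return dense + sparse
```

Under this reading, the dense term shapes behaviour at every step while the sparse term anchors the episode-level objectives (lost rate and arrival rate) reported in the abstract.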
Related papers
- ActLoc: Learning to Localize on the Move via Active Viewpoint Selection [52.909507162638526]
ActLoc is an active viewpoint-aware planning framework for enhancing localization accuracy in general robot navigation tasks. At its core, ActLoc employs a large-scale trained attention-based model for viewpoint selection. ActLoc achieves state-of-the-art results on single-viewpoint selection and generalizes effectively to full-trajectory planning.
arXiv Detail & Related papers (2025-08-28T16:36:02Z) - ForesightNav: Learning Scene Imagination for Efficient Exploration [57.49417653636244]
We propose ForesightNav, a novel exploration strategy inspired by human imagination and reasoning.
Our approach equips robotic agents with the capability to predict contextual information, such as occupancy and semantic details, for unexplored regions.
We validate our imagination-based approach using the Structured3D dataset, demonstrating accurate occupancy prediction and superior performance in anticipating unseen scene geometry.
arXiv Detail & Related papers (2025-04-22T17:38:38Z) - Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z) - CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation [73.78984332354636]
CorNav is a novel zero-shot framework for vision-and-language navigation.
It incorporates environmental feedback for refining future plans and adjusting its actions.
It consistently outperforms all baselines in a zero-shot multi-task setting.
arXiv Detail & Related papers (2023-06-17T11:44:04Z) - Robot path planning using deep reinforcement learning [0.0]
Reinforcement learning methods offer an alternative to map-free navigation tasks.
Deep reinforcement learning agents are implemented for both the obstacle-avoidance and the goal-oriented navigation tasks.
An analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
arXiv Detail & Related papers (2023-02-17T20:08:59Z) - Incremental 3D Scene Completion for Safe and Efficient Exploration
Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z) - ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z) - XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision
Trees [55.9643422180256]
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments.
Our approach uses deep reinforcement learning-based expert policy that is trained using a sim2real paradigm.
We highlight the benefits of our algorithm in simulated environments and navigating a Clearpath Jackal robot among moving pedestrians.
arXiv Detail & Related papers (2021-04-22T01:33:10Z) - Indoor Point-to-Point Navigation with Deep Reinforcement Learning and
Ultra-wideband [1.6799377888527687]
Moving obstacles and non-line-of-sight occurrences can generate noisy and unreliable signals.
We show how a power-efficient point-to-point local planner, learnt with deep reinforcement learning (RL), can constitute a complete short-range guidance solution that is robust and resilient to noise.
Our results show that the computationally efficient end-to-end policy, learnt entirely in simulation, can provide a robust, scalable, low-cost navigation solution that runs at the edge.
arXiv Detail & Related papers (2020-11-18T12:30:36Z) - Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning
on Graphs [5.043563227694137]
We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time.
We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space.
arXiv Detail & Related papers (2020-07-24T16:50:41Z) - Graph-based Proprioceptive Localization Using a Discrete Heading-Length
Feature Sequence Matching Approach [14.356113113268389]
Proprioceptive localization refers to a new class of robot egocentric localization methods.
These methods are naturally immune to bad weather, poor lighting conditions, or other extreme environmental conditions.
We provide a low-cost fallback solution for localization under challenging environmental conditions.
arXiv Detail & Related papers (2020-05-27T23:10:15Z) - Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.