MPTP: Motion-Planning-aware Task Planning for Navigation in Belief Space
- URL: http://arxiv.org/abs/2104.04696v1
- Date: Sat, 10 Apr 2021 06:52:16 GMT
- Authors: Antony Thomas, Fulvio Mastrogiovanni, Marco Baglietto
- Abstract summary: We present an integrated Task-Motion Planning framework for navigation in large-scale environments.
The framework is intended for motion planning under motion and sensing uncertainty.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an integrated Task-Motion Planning (TMP) framework for navigation
in large-scale environments. Of late, TMP for manipulation has attracted
significant interest resulting in a proliferation of different approaches. In
contrast, TMP for navigation has received considerably less attention.
Autonomous robots operating in real-world complex scenarios require planning in
the discrete (task) space and the continuous (motion) space. In
knowledge-intensive domains, on the one hand, a robot has to reason at the
highest level, for example, about the objects to procure and the regions to
navigate to in order to acquire them; on the other hand, the feasibility of the
respective navigation tasks has to be checked at the execution level. This presents a
need for motion-planning-aware task planners. In this paper, we discuss a
probabilistically complete approach that leverages this task-motion interaction
for navigating in large knowledge-intensive domains, returning a plan that is
optimal at the task-level. The framework is intended for motion planning under
motion and sensing uncertainty, which is formally known as belief space
planning. The underlying methodology is validated in simulation in an office
environment, and its scalability is tested in the larger Willow Garage world. A
comparison with the work closest to our approach is also provided. We also
demonstrate the adaptability of our approach by considering a
building floor navigation domain. Finally, we discuss the limitations of our
approach and put forward suggestions for improvements and future work.
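"Belief space" here means planning over probability distributions of the robot state rather than over exact states. As a minimal, hypothetical illustration of the underlying idea (not the paper's actual planner), a one-dimensional Gaussian belief update under motion and sensing noise can be sketched as:

```python
def predict(mean, var, u, motion_var):
    # Propagate the belief through a noisy motion model x' = x + u + w:
    # the mean shifts by the control u, the variance grows by the motion noise.
    return mean + u, var + motion_var

def update(mean, var, z, sense_var):
    # Fuse a noisy measurement z of the state (Kalman correction):
    # the variance shrinks, reflecting reduced uncertainty after sensing.
    k = var / (var + sense_var)  # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# One predict-update cycle: command a move of 1.0, then observe z = 1.2.
mean, var = 0.0, 1.0
mean, var = predict(mean, var, u=1.0, motion_var=0.5)
mean, var = update(mean, var, z=1.2, sense_var=0.5)
print(mean, var)  # -> 1.15 0.375
```

A belief-space planner evaluates candidate navigation actions by how they evolve this distribution, preferring plans that keep the posterior uncertainty (the variance above) acceptably small.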
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of jointly finding a discrete task plan and the continuous motions that realize it.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Loc4Plan: Locating Before Planning for Outdoor Vision and Language Navigation [31.509686652011798]
Vision and Language Navigation (VLN) is a challenging task that requires agents to understand instructions and navigate to the destination in a visual environment.
Previous works mainly focus on grounding the natural language in the visual input, but neglect the crucial role of the agent's spatial position information in the grounding process.
In this work, we introduce a novel framework, Locating before Planning (Loc4Plan), designed to incorporate spatial perception for action planning in outdoor VLN tasks.
arXiv Detail & Related papers (2024-08-09T14:31:09Z)
- TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with terrain awareness, obstacle avoidance, and closed-loop proprioception.
We show that TOP-Nav achieves open-world navigation in which the robot can handle terrains and disturbances beyond the distribution of its prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability to perform obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
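The abstract does not specify how HTP weights its frontiers; a plausible, purely illustrative sketch of frontier-based exploration (with hypothetical gain and travel-cost weights, not the paper's actual policy) is:

```python
def select_frontier(frontiers, robot_pos, w_gain=1.0, w_cost=0.5):
    """Pick the frontier with the best trade-off between expected
    information gain and travel cost (Manhattan distance here)."""
    def score(f):
        dist = abs(f["pos"][0] - robot_pos[0]) + abs(f["pos"][1] - robot_pos[1])
        return w_gain * f["gain"] - w_cost * dist
    return max(frontiers, key=score)

frontiers = [
    {"pos": (2, 0), "gain": 5.0},   # close, modest unexplored area
    {"pos": (8, 8), "gain": 14.0},  # far, large unexplored area
]
best = select_frontier(frontiers, robot_pos=(0, 0))
# Here the far frontier wins: its larger gain outweighs the travel cost.
```

In a hierarchical policy of this kind, the selected frontier becomes a subgoal on the topological graph, and a lower-level controller handles the actual motion toward it.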
arXiv Detail & Related papers (2022-10-28T05:30:49Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Towards Multi-Robot Task-Motion Planning for Navigation in Belief Space [1.4824891788575418]
We present an integrated multi-robot task-motion planning framework for navigation in knowledge-intensive domains.
In particular, we consider a distributed multi-robot setting incorporating mutual observations between the robots.
The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning.
arXiv Detail & Related papers (2020-10-01T06:45:17Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.