Efficient Task Planning for Mobile Manipulation: a Virtual Kinematic
Chain Perspective
- URL: http://arxiv.org/abs/2108.01259v1
- Date: Tue, 3 Aug 2021 02:49:18 GMT
- Title: Efficient Task Planning for Mobile Manipulation: a Virtual Kinematic
Chain Perspective
- Authors: Ziyuan Jiao, Zeyu Zhang, Weiqi Wang, David Han, Song-Chun Zhu, Yixin
Zhu, Hangxin Liu
- Abstract summary: We present a Virtual Kinematic Chain perspective to improve task planning efficacy for mobile manipulation.
By consolidating the kinematics of the mobile base, the arm, and the object being manipulated collectively as a whole, this novel VKC perspective naturally defines abstract actions.
In experiments, we implement a task planner using Planning Domain Definition Language (PDDL) with VKC.
- Score: 88.25410628450453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a Virtual Kinematic Chain (VKC) perspective, a simple yet
effective method, to improve task planning efficacy for mobile manipulation. By
consolidating the kinematics of the mobile base, the arm, and the object being
manipulated collectively as a whole, this novel VKC perspective naturally
defines abstract actions and eliminates unnecessary predicates in describing
intermediate poses. As a result, these advantages simplify the design of the
planning domain and significantly reduce the search space and branching factors
in solving planning problems. In experiments, we implement a task planner using
Planning Domain Definition Language (PDDL) with VKC. Compared with conventional
domain definition, our VKC-based domain definition is more efficient in both
planning time and memory. In addition, abstract actions perform better in
producing feasible motion plans and trajectories. We further scale up the
VKC-based task planner in complex mobile manipulation tasks. Taken together,
these results demonstrate that task planning using VKC for mobile manipulation
is not only natural and effective but also introduces new capabilities.
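The abstract's central idea is that the mobile base, the arm, and the grasped object are consolidated into one serial chain, so a task goal can be stated on the object alone instead of through predicates over intermediate base and arm poses. As a rough illustration only (not the paper's implementation), the Python sketch below composes planar homogeneous transforms for a hypothetical base-arm-object chain; the function and variable names are assumptions made for this example.

```python
# Minimal sketch (assumed names, not the authors' code): treating the mobile
# base, the arm, and a grasped object as one serial "virtual kinematic chain"
# by composing planar homogeneous transforms.
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# world <- base (mobile base pose), base <- end-effector (arm FK result),
# end-effector <- object (fixed grasp transform).
T_world_base = se2(1.0, 2.0, np.pi / 2)
T_base_ee    = se2(0.4, 0.0, 0.0)
T_ee_obj     = se2(0.1, 0.0, 0.0)

# Forward kinematics of the consolidated chain: a single product gives the
# object pose in the world frame, so a goal can constrain the whole chain at
# once rather than requiring separate predicates for base and arm waypoints.
T_world_obj = T_world_base @ T_base_ee @ T_ee_obj
print(T_world_obj[:2, 2])  # object position in the world frame
```

In this toy view, an abstract action such as "place object at pose G" only needs the goal constraint on T_world_obj; how the base and arm jointly realize it is left to the downstream motion planner, which is consistent with the reduced search space the abstract describes.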
Related papers
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning [78.2390460278551]
Conventional Task and Motion Planning (TAMP) approaches rely on manually crafted interfaces connecting symbolic task planning with continuous motion generation.
Here, we present LLM3, a novel Large Language Model (LLM)-based TAMP framework featuring a domain-independent interface.
Specifically, we leverage the powerful reasoning and planning capabilities of pre-trained LLMs to propose symbolic action sequences and select continuous action parameters for motion planning.
arXiv Detail & Related papers (2024-03-18T08:03:47Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space [24.95320093765214]
AMP-LS is able to plan in novel, complex scenes while outperforming traditional planning baselines in terms of speed by an order of magnitude.
We show that the resulting system is fast enough to enable closed-loop planning in real-world dynamic scenes.
arXiv Detail & Related papers (2023-03-06T18:49:39Z)
- Consolidating Kinematic Models to Promote Coordinated Mobile Manipulations [96.03270112422514]
We construct a Virtual Kinematic Chain (VKC) that consolidates the kinematics of the mobile base, the arm, and the object to be manipulated in mobile manipulations.
A mobile manipulation task is represented by altering the state of the constructed VKC, which can be converted to a motion planning problem.
arXiv Detail & Related papers (2021-08-03T02:59:41Z)
- Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation [74.88956115580388]
Planning is performed in a low-dimensional latent state space that embeds images.
Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them.
arXiv Detail & Related papers (2020-03-19T18:43:26Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme which avoids pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.