Sequential Manipulation Planning on Scene Graph
- URL: http://arxiv.org/abs/2207.04364v3
- Date: Thu, 14 Jul 2022 09:46:53 GMT
- Title: Sequential Manipulation Planning on Scene Graph
- Authors: Ziyuan Jiao, Yida Niu, Zeyu Zhang, Song-Chun Zhu, Yixin Zhu, Hangxin Liu
- Abstract summary: We devise a 3D scene graph representation, contact graph+ (cg+), for efficient sequential task planning.
Goal configurations, naturally specified on contact graphs, can be produced by a genetic algorithm with a stochastic optimization method.
A task plan is then initialized by computing the Graph Editing Distance (GED) between the initial contact graphs and the goal configurations, which generates graph edit operations corresponding to possible robot actions.
- Score: 90.28117916077073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We devise a 3D scene graph representation, contact graph+ (cg+), for
efficient sequential task planning. Augmented with predicate-like attributes,
this contact graph-based representation abstracts scene layouts with succinct
geometric information and valid robot-scene interactions. Goal configurations,
naturally specified on contact graphs, can be produced by a genetic algorithm
with a stochastic optimization method. A task plan is then initialized by
computing the Graph Editing Distance (GED) between the initial contact graphs
and the goal configurations, which generates graph edit operations
corresponding to possible robot actions. We finalize the task plan by imposing
constraints to regulate the temporal feasibility of graph edit operations,
ensuring valid task and motion correspondences. In a series of simulations and
experiments, robots successfully complete complex sequential object
rearrangement tasks that are difficult to specify using conventional planning
languages such as the Planning Domain Definition Language (PDDL), demonstrating
the high feasibility and potential of robot sequential task planning on contact
graphs.
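The GED step lends itself to a small worked example. The sketch below is a minimal illustration, not the authors' implementation: it encodes a toy scene as a directed contact graph with networkx, searches for an edit path to a goal graph, and reads off the changed supporting edges as candidate pick-and-place actions. The scene, the `movable` attribute, and the unit cost model are assumptions, and the GA-based goal generation is omitted.

```python
# A minimal, illustrative sketch of the GED-based planning step -- not the
# authors' implementation. The toy scene, node attributes, and unit edit
# costs are assumptions for demonstration only.
import networkx as nx

def contact_graph(support_edges, node_attrs):
    """Directed contact graph: an edge (a, b) means b is supported by a.
    Node attributes stand in for the predicate-like attributes of cg+."""
    g = nx.DiGraph()
    for node, attrs in node_attrs.items():
        g.add_node(node, name=node, **attrs)  # 'name' forces identity matching
    g.add_edges_from(support_edges)
    return g

attrs = {
    "table": {"movable": False},
    "plate": {"movable": True},
    "cup":   {"movable": True},
}
init = contact_graph([("table", "plate"), ("table", "cup")], attrs)  # cup on table
goal = contact_graph([("table", "plate"), ("plate", "cup")], attrs)  # cup on plate

match = lambda a, b: a == b  # compare nodes/edges by their attribute dicts

# optimize_edit_paths yields successively cheaper edit paths; keep the last
# (cheapest) one found.
best = None
for path in nx.optimize_edit_paths(init, goal, node_match=match, edge_match=match):
    best = path
node_edits, edge_edits, cost = best

print("GED cost:", cost)
for old_edge, new_edge in edge_edits:
    if old_edge != new_edge:  # a changed supporting edge -> a candidate action
        print("edit operation:", old_edge, "->", new_edge)
```

Here the delete/insert pair on the cup's supporting edge (('table', 'cup') removed, ('plate', 'cup') added) together correspond to a single pick-and-place; in the paper, such edit operations are additionally ordered by temporal-feasibility constraints before being grounded into motions.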
Related papers
- Dynamic and Textual Graph Generation Via Large-Scale LLM-based Agent Simulation [70.60461609393779]
GraphAgent-Generator (GAG) is a novel simulation-based framework for dynamic graph generation.
Our framework effectively replicates seven macro-level structural characteristics in established network science theories.
It supports generating graphs with up to nearly 100,000 nodes or 10 million edges, with a minimum speed-up of 90.4%.
arXiv Detail & Related papers (2024-10-13T12:57:08Z)
- Exploring Task Unification in Graph Representation Learning via Generative Approach [12.983429541410617]
Graphs are ubiquitous in real-world scenarios and encompass a diverse range of tasks, from node-, edge-, and graph-level tasks to transfer learning.
Recent endeavors aim to design a unified framework capable of generalizing across multiple graph tasks.
Among these, graph autoencoders (GAEs) have demonstrated their potential in effectively addressing various graph tasks.
We propose GA2E, a unified adversarially masked autoencoder capable of addressing the above challenges seamlessly.
arXiv Detail & Related papers (2024-03-21T12:14:02Z)
- Unsupervised Task Graph Generation from Instructional Video Transcripts [53.54435048879365]
We consider a setting where text transcripts of instructional videos performing a real-world activity are provided.
The goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps.
We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models along with clustering and ranking components.
arXiv Detail & Related papers (2023-02-17T22:50:08Z)
- Learning to Search in Task and Motion Planning with Streams [20.003445874753233]
Task and motion planning problems in robotics combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables.
We propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner.
We apply our algorithm to a 7-DOF robotic arm in block-stacking manipulation tasks.
arXiv Detail & Related papers (2021-11-25T15:58:31Z)
- Unconditional Scene Graph Generation [72.53624470737712]
We develop a deep auto-regressive model called SceneGraphGen which can learn the probability distribution over labelled and directed graphs.
We show that the scene graphs generated by SceneGraphGen are diverse and follow the semantic patterns of real-world scenes.
arXiv Detail & Related papers (2021-08-12T17:57:16Z)
- A Task-Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks [1.3535770763481902]
We present an approach for Task-Motion Planning (TMP) using Iteratively Deepened AND/OR Graph Networks (TMP-IDAN).
TMP-IDAN uses a novel AND/OR graph network-based abstraction to compactly represent task-level states and actions.
We validate our approach and evaluate its capabilities using a Baxter robot and a state-of-the-art robotics simulator.
arXiv Detail & Related papers (2021-04-04T07:06:52Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z)