Planning with Learned Object Importance in Large Problem Instances using
Graph Neural Networks
- URL: http://arxiv.org/abs/2009.05613v2
- Date: Tue, 8 Dec 2020 19:58:17 GMT
- Authors: Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas
Lozano-Perez, Leslie Pack Kaelbling
- Abstract summary: Real-world planning problems often involve hundreds or even thousands of objects.
We propose a graph neural network architecture for predicting object importance in a single inference pass.
Our approach treats the planner and transition model as black boxes, and can be used with any off-the-shelf planner.
- Score: 28.488201307961624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world planning problems often involve hundreds or even thousands of
objects, straining the limits of modern planners. In this work, we address this
challenge by learning to predict a small set of objects that, taken together,
would be sufficient for finding a plan. We propose a graph neural network
architecture for predicting object importance in a single inference pass, thus
incurring little overhead while greatly reducing the number of objects that
must be considered by the planner. Our approach treats the planner and
transition model as black boxes, and can be used with any off-the-shelf
planner. Empirically, across classical planning, probabilistic planning, and
robotic task and motion planning, we find that our method results in planning
that is significantly faster than several baselines, including other partial
grounding strategies and lifted planners. We conclude that learning to predict
a sufficient set of objects for a planning problem is a simple, powerful, and
general mechanism for planning in large instances. Video:
https://youtu.be/FWsVJc2fvCE Code: https://git.io/JIsqX
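To make the mechanism concrete, here is a minimal numpy sketch of the idea: a hypothetical message-passing network that scores every object in a single forward pass, followed by threshold-based selection of a small object set for the planner. The weight matrices, feature encoding, and threshold rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gnn_object_scores(node_feats, adj, W_msg, W_upd, w_out, rounds=2):
    """Score every object with a fixed number of message-passing rounds.

    node_feats: (N, D) per-object input features (e.g. type and goal flags).
    adj:        (N, N) 0/1 adjacency built from relations in the initial state.
    Returns one sigmoid importance score per object, in (0, 1).
    """
    h = node_feats
    for _ in range(rounds):
        messages = adj @ (h @ W_msg)       # sum transformed neighbor states
        h = np.tanh(h @ W_upd + messages)  # update each node's embedding
    return 1.0 / (1.0 + np.exp(-(h @ w_out)))

def select_objects(scores, objects, threshold=0.5):
    """Keep the objects whose predicted importance clears the threshold.

    The pruned problem (these objects only) can then be handed to an
    off-the-shelf planner; if planning fails, the threshold can be
    lowered and planning retried on a larger object set.
    """
    return [obj for obj, s in zip(objects, scores) if s >= threshold]
```

Because scoring is one matrix-heavy forward pass over the object graph, its cost is negligible next to the planner's search, which is the source of the "little overhead" claim above.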
Related papers
- Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos [48.15438373870542]
VidAssist is an integrated framework designed for zero/few-shot goal-oriented planning in instructional videos.
It employs a breadth-first search algorithm for optimal plan generation.
Experiments demonstrate that VidAssist offers a unified framework for different goal-oriented planning setups.
arXiv Detail & Related papers (2024-09-30T17:57:28Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- Parting with Misconceptions about Learning-based Vehicle Motion Planning [30.39229175273061]
nuPlan marks a new era in vehicle motion planning research.
Existing systems struggle to simultaneously meet both requirements.
We propose an extremely simple and efficient planner which outperforms an extensive set of competitors.
arXiv Detail & Related papers (2023-06-13T17:57:03Z)
- A Framework for Neurosymbolic Robot Action Planning using Large Language Models [3.0501524254444767]
We present a framework aimed at bridging the gap between symbolic task planning and machine learning approaches.
The rationale is to train Large Language Models (LLMs) into a neurosymbolic task planner compatible with the Planning Domain Definition Language (PDDL).
Preliminary results in selected domains show that our method can: (i) solve 95.5% of problems in a test data set of 1,000 samples; (ii) produce plans up to 13.5% shorter than a traditional symbolic planner; (iii) reduce average overall waiting times for a plan availability by up to 61.4%.
arXiv Detail & Related papers (2023-03-01T11:54:22Z)
- Learning to Search in Task and Motion Planning with Streams [20.003445874753233]
Task and motion planning problems in robotics combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables.
We propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner.
We apply our algorithm on a 7DOF robotic arm in block-stacking manipulation tasks.
arXiv Detail & Related papers (2021-11-25T15:58:31Z)
- Visual scoping operations for physical assembly [0.0]
We propose visual scoping, a strategy that interleaves planning and acting by alternately defining a spatial region as the next subgoal.
We find that visual scoping achieves comparable task performance to the subgoal planner while requiring only a fraction of the total computational cost.
arXiv Detail & Related papers (2021-06-10T10:50:35Z)
- NeRP: Neural Rearrangement Planning for Unknown Objects [49.191284597526]
We propose NeRP (Neural Rearrangement Planning), a deep learning based approach for multi-step neural object rearrangement planning.
NeRP is trained on simulation data, works with never-before-seen objects, and generalizes to the real world.
arXiv Detail & Related papers (2021-06-02T17:56:27Z)
- Task Scoping: Generating Task-Specific Abstractions for Planning [19.411900372400183]
Planning to solve any specific task using an open-scope world model is computationally intractable.
We propose task scoping: a method that exploits knowledge of the initial condition, goal condition, and transition-dynamics structure of a task.
We prove that task scoping never deletes relevant factors or actions, characterize its computational complexity, and characterize the planning problems for which it is especially useful.
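As a toy illustration of this kind of relevance analysis (not the paper's algorithm), one can backward-chain from the goal over a simplified factored action model, so that no variable read by a relevant action is ever dropped:

```python
def relevant_factors(goal_vars, actions):
    """Backward-chain relevance: a variable is relevant if the goal
    mentions it, or if some action that writes a relevant variable
    reads it. `actions` is a list of (reads, writes) pairs of
    variable-name sets, a toy stand-in for a real transition model.
    """
    relevant = set(goal_vars)
    changed = True
    while changed:
        changed = False
        for reads, writes in actions:
            # An action matters only if it can change something relevant;
            # everything it reads then becomes relevant too.
            if writes & relevant and not reads <= relevant:
                relevant |= reads
                changed = True
    return relevant
```

The fixed-point loop only ever grows the relevant set, which mirrors the soundness property claimed above: pruning by this criterion cannot delete a factor the plan depends on.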
arXiv Detail & Related papers (2020-10-17T21:19:25Z)
- Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors [124.30562402952319]
The ability to predict and plan into the future is fundamental for agents acting in the world.
Current learning approaches for visual prediction and planning fail on long-horizon tasks.
We propose a framework for visual prediction and planning that is able to overcome both of these limitations.
arXiv Detail & Related papers (2020-06-23T17:58:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.