Classical Planning in Deep Latent Space
- URL: http://arxiv.org/abs/2107.00110v1
- Date: Wed, 30 Jun 2021 21:31:21 GMT
- Title: Classical Planning in Deep Latent Space
- Authors: Masataro Asai, Hiroshi Kajino, Alex Fukunaga, Christian Muise
- Abstract summary: Latplan is an unsupervised architecture combining deep learning and classical planning.
Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution.
- Score: 33.06766829037679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current domain-independent, classical planners require symbolic models of the
problem domain and instance as input, resulting in a knowledge acquisition
bottleneck. Meanwhile, although deep learning has achieved significant success
in many fields, the knowledge is encoded in a subsymbolic representation which
is incompatible with symbolic systems such as planners. We propose Latplan, an
unsupervised architecture combining deep learning and classical planning. Given
only an unlabeled set of image pairs showing a subset of transitions allowed in
the environment (training inputs), Latplan learns a complete propositional PDDL
action model of the environment. Later, when a pair of images representing the
initial and the goal states (planning inputs) is given, Latplan finds a plan to
the goal state in a symbolic latent space and returns a visualized plan
execution. We evaluate Latplan using image-based versions of 6 planning
domains: 8-Puzzle, 15-Puzzle, Blocksworld, Sokoban, and two variations of
LightsOut.
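The symbolic half of this pipeline can be sketched in a few lines: once an encoder maps images to propositional (bit-vector) latent states and an action model over those bits has been learned, finding a plan reduces to graph search. The following is an illustrative sketch only; the state encoding, action format, and function names are assumptions for exposition, not Latplan's actual interfaces.

```python
# Sketch of search in a propositional latent space (not Latplan's code).
# States are bit tuples produced by a hypothetical image encoder; each action
# is (name, preconditions, effects), where preconditions and effects are
# dicts mapping a bit index to a required or resulting value.
from collections import deque

def applicable(state, pre):
    # An action applies when every precondition bit has the required value.
    return all(state[i] == v for i, v in pre.items())

def apply_effects(state, eff):
    # Effects overwrite the listed bits; all other bits are unchanged.
    s = list(state)
    for i, v in eff.items():
        s[i] = v
    return tuple(s)

def plan(init, goal, actions):
    """Breadth-first search from init to goal; returns action names or None."""
    frontier = deque([(init, [])])
    seen = {init}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, pre, eff in actions:
            if applicable(state, pre):
                nxt = apply_effects(state, eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None
```

In practice a classical planner with heuristics replaces the blind BFS above; the point is only that, after learning, the latent space is a discrete transition system that off-the-shelf symbolic planners can search.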
Related papers
- PDDLEGO: Iterative Planning in Textual Environments [56.12148805913657]
Planning in textual environments has been shown to be a long-standing challenge even for current models.
We propose PDDLEGO, which iteratively constructs a planning representation that can lead to a partial plan for a given sub-goal.
We show that plans produced by few-shot PDDLEGO are 43% more efficient than generating plans end-to-end on the Coin Collector simulation.
arXiv Detail & Related papers (2024-05-30T08:01:20Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- Compositional Foundation Models for Hierarchical Planning [52.18904315515153]
We propose a foundation model that leverages multiple expert foundation models, trained individually on language, vision, and action data, to solve long-horizon tasks.
We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model.
Generated video plans are then grounded in visual-motor control through an inverse dynamics model that infers actions from the generated videos.
arXiv Detail & Related papers (2023-09-15T17:44:05Z)
- Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks [28.488201307961624]
Real-world planning problems often involve hundreds or even thousands of objects.
We propose a graph neural network architecture for predicting object importance in a single inference pass.
Our approach treats the planner and transition model as black boxes, and can be used with any off-the-shelf planner.
arXiv Detail & Related papers (2020-09-11T18:55:08Z)
- Plan2Vec: Unsupervised Representation Learning by Latent Plans [106.37274654231659]
We introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.
Plan2vec constructs a weighted graph on an image dataset using near-neighbor distances, and then extrapolates this local metric to a global embedding by distilling a path integral over planned paths.
We demonstrate the effectiveness of plan2vec on one simulated and two challenging real-world image datasets.
arXiv Detail & Related papers (2020-05-07T17:52:23Z)
- Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z)
- STRIPS Action Discovery [67.73368413278631]
Recent approaches have shown the success of classical planning at synthesizing action models even when all intermediate states are missing.
We propose a new algorithm that uses a classical planner to synthesize STRIPS action models in an unsupervised manner when action signatures are unknown.
arXiv Detail & Related papers (2020-01-30T17:08:39Z)
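The STRIPS action models that the entry above aims to synthesize have a simple propositional reading: each action is a triple of preconditions, an add list, and a delete list over ground facts. A minimal sketch of that semantics (the action name and facts below are hypothetical, not from the paper):

```python
# Sketch of STRIPS action application: a state is a frozenset of ground
# facts; an action is (preconditions, add list, delete list).

def strips_apply(state, action):
    """Apply a STRIPS action to a state, or return None if the
    preconditions do not all hold in the state."""
    pre, add, delete = action
    if not pre <= state:          # precondition check: pre must be a subset
        return None
    return (state - delete) | add  # remove deleted facts, then add new ones

# Toy example: a Blocksworld-style stack(a, b) action (illustrative only).
stack_ab = (
    frozenset({"holding(a)", "clear(b)"}),            # preconditions
    frozenset({"on(a,b)", "clear(a)", "handempty"}),  # add list
    frozenset({"holding(a)", "clear(b)"}),            # delete list
)
```

Synthesis methods such as the one above search for the precondition, add, and delete sets that make observed state transitions consistent with this apply rule.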
This list is automatically generated from the titles and abstracts of the papers on this site.