Inventing Relational State and Action Abstractions for Effective and
Efficient Bilevel Planning
- URL: http://arxiv.org/abs/2203.09634v1
- Date: Thu, 17 Mar 2022 22:13:09 GMT
- Title: Inventing Relational State and Action Abstractions for Effective and
Efficient Bilevel Planning
- Authors: Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas
Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum
- Abstract summary: We develop a novel framework for learning state and action abstractions.
We learn relational, neuro-symbolic abstractions that generalize over object identities and numbers.
We show that our learned abstractions are able to quickly solve held-out tasks of longer horizons.
- Score: 26.715198108255162
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective and efficient planning in continuous state and action spaces is
fundamentally hard, even when the transition model is deterministic and known.
One way to alleviate this challenge is to perform bilevel planning with
abstractions, where a high-level search for abstract plans is used to guide
planning in the original transition space. In this paper, we develop a novel
framework for learning state and action abstractions that are explicitly
optimized for both effective (successful) and efficient (fast) bilevel
planning. Given demonstrations of tasks in an environment, our data-efficient
approach learns relational, neuro-symbolic abstractions that generalize over
object identities and numbers. The symbolic components resemble the STRIPS
predicates and operators found in AI planning, and the neural components refine
the abstractions into actions that can be executed in the environment.
Experimentally, we show across four robotic planning environments that our
learned abstractions are able to quickly solve held-out tasks of longer
horizons than were seen in the demonstrations, and can even outperform the
efficiency of abstractions that we manually specified. We also find that as the
planner configuration varies, the learned abstractions adapt accordingly,
indicating that our abstraction learning method is both "task-aware" and
"planner-aware." Code: https://tinyurl.com/predicators-release
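The following is a minimal, illustrative Python sketch of the bilevel planning loop described in the abstract above: STRIPS-like operators define the abstract space searched at the high level, and per-operator samplers (the neural components) refine each abstract step into an executable action. This is a simplification under stated assumptions, not the released implementation linked above; names such as `Operator`, `abstract_plan_search`, and `bilevel_plan` are illustrative, and a real system would use a heuristic AI planner and smarter backtracking rather than breadth-first search and whole-skeleton retries.

```python
# Illustrative sketch only; not the authors' released code. All names are assumptions.
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Optional


@dataclass(frozen=True)
class Operator:
    """A STRIPS-like operator paired with a learned low-level action sampler."""
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    delete_effects: FrozenSet[str]
    sampler: Callable  # hypothetical neural component: low-level state -> action


def apply_op(op: Operator, atoms: FrozenSet[str]) -> FrozenSet[str]:
    """Successor abstract state under the usual STRIPS semantics."""
    return (atoms - op.delete_effects) | op.add_effects


def abstract_plan_search(init: FrozenSet[str], goal: FrozenSet[str],
                         operators: List[Operator],
                         max_depth: int = 8) -> Optional[List[Operator]]:
    """Breadth-first search over abstract states (a stand-in for an AI planner)."""
    frontier = [(init, [])]
    visited = {init}
    while frontier:
        atoms, plan = frontier.pop(0)
        if goal.issubset(atoms):
            return plan
        if len(plan) == max_depth:
            continue
        for op in operators:
            if op.preconditions.issubset(atoms):
                nxt = apply_op(op, atoms)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [op]))
    return None


def bilevel_plan(x0, abstract, goal, operators, simulate, num_tries=10):
    """High-level abstract plans guide refinement into executable low-level actions."""
    skeleton = abstract_plan_search(abstract(x0), goal, operators)
    if skeleton is None:
        return None
    for _ in range(num_tries):          # resample refinements a few times
        x, actions = x0, []
        for op in skeleton:
            u = op.sampler(x)           # neural sampler proposes a continuous action
            x = simulate(x, u)          # deterministic, known transition model
            if not op.add_effects.issubset(abstract(x)):
                break                   # abstract step not achieved; retry the skeleton
            actions.append(u)
        else:
            return actions              # every abstract step was successfully refined
    return None
```

The division of labor is the point: the symbolic operators keep the high-level search small and relational, while the samplers ground each abstract step in the original continuous state and action spaces.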
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
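As a rough illustration of the idea named in the entry above (and closely related to the predicates of the main paper), here is a hedged sketch of a predicate whose symbol is first-order and relational but whose truth value is computed by a learned classifier over continuous object features. The classifier below is a hand-written stand-in; the class name, fields, and `holds` method are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch of a "neuro-symbolic predicate"; names are assumptions.
import math
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

Features = Dict[str, Sequence[float]]  # object name -> continuous feature vector


@dataclass(frozen=True)
class NeuroSymbolicPredicate:
    """A relational symbol whose truth value is computed by a learned classifier."""
    name: str
    arity: int
    classifier: Callable[..., bool]  # in practice, a small neural network

    def holds(self, state: Features, *objects: str) -> bool:
        assert len(objects) == self.arity
        return self.classifier(*(state[o] for o in objects))


# Hand-written stand-in for a learned classifier: On(a, b) holds when a's height
# is approximately one block above b's.
On = NeuroSymbolicPredicate(
    "On", 2, lambda fa, fb: math.isclose(fa[2], fb[2] + 0.05, abs_tol=0.005))

state = {"block1": (0.0, 0.0, 0.10), "block2": (0.0, 0.0, 0.05)}
print(On.holds(state, "block1", "block2"))  # True
```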
- Embodied Instruction Following in Unknown Environments [66.60163202450954]
We propose an embodied instruction following (EIF) method for complex tasks in unknown environments.
We build a hierarchical embodied instruction following framework consisting of a high-level task planner and a low-level exploration controller.
The task planner generates feasible step-by-step plans for accomplishing the human's goal, based on the task completion process and the known visual clues.
arXiv Detail & Related papers (2024-06-17T17:55:40Z)
- Learning Planning Abstractions from Language [28.855381137615275]
This paper presents a framework for learning state and action abstractions in sequential decision-making domains.
Our framework, planning abstraction from language (PARL), utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space.
arXiv Detail & Related papers (2024-05-06T21:24:22Z)
- Learning with Language-Guided State Abstractions [58.199148890064826]
Generalizable policy learning in high-dimensional observation spaces is facilitated by well-designed state representations.
Our method, LGA, uses a combination of natural language supervision and background knowledge from language models to automatically build state representations tailored to unseen tasks.
Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time.
arXiv Detail & Related papers (2024-02-28T23:57:04Z)
- From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions, and Models for Planning from Raw Data [20.01856556195228]
This paper presents the first approach for autonomously learning logic-based relational representations for abstract states and actions.
The learned representations constitute auto-invented PDDL-like domain models.
Empirical results in deterministic settings show that powerful abstract representations can be learned from just a handful of robot trajectories.
arXiv Detail & Related papers (2024-02-19T06:28:21Z)
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Hierarchical Imitation Learning with Vector Quantized Models [77.67190661002691]
We propose to use reinforcement learning to identify subgoals in expert trajectories.
We build a vector-quantized generative model for the identified subgoals to perform subgoal-level planning.
In experiments, the algorithm excels at solving complex, long-horizon decision-making problems, outperforming the state of the art.
arXiv Detail & Related papers (2023-01-30T15:04:39Z)
- Learning Efficient Abstract Planning Models that Choose What to Predict [28.013014215441505]
We show that existing symbolic operator learning approaches fall short in many robotics domains.
This is primarily because they attempt to learn operators that exactly predict all observed changes in the abstract state.
We propose to learn operators that 'choose what to predict' by only modelling changes necessary for abstract planning to achieve specified goals.
arXiv Detail & Related papers (2022-08-16T13:12:59Z)
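To make the "choose what to predict" idea in the entry above concrete, here is a heavily simplified sketch in which observed effects are filtered down to atoms over a given set of planning-relevant predicates. In the paper the choice is driven by what abstract planning needs to reach the demonstrated goals rather than by a hand-specified set, and none of the names below come from its implementation.

```python
# Simplified, hypothetical sketch of filtering operator effects; not the paper's code.
from typing import FrozenSet, Set, Tuple


def filter_effects(observed_add: FrozenSet[str],
                   observed_delete: FrozenSet[str],
                   necessary_predicates: Set[str]) -> Tuple[FrozenSet[str], FrozenSet[str]]:
    """Keep only the effects an operator needs to predict for abstract planning."""
    def keep(atom: str) -> bool:
        return atom.split("(")[0] in necessary_predicates

    return (frozenset(filter(keep, observed_add)),
            frozenset(filter(keep, observed_delete)))


# One demonstrated transition changes many incidental atoms, but only Holding/On
# matter for the goals in this hypothetical domain, so the rest is not modeled.
add = frozenset({"Holding(block1)", "NextTo(robot,table)", "Facing(robot,door)"})
delete = frozenset({"On(block1,table)", "HandEmpty(robot)"})
print(filter_effects(add, delete, {"Holding", "On"}))
# -> (frozenset({'Holding(block1)'}), frozenset({'On(block1,table)'}))
```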
- Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning [27.384742641275228]
We present a new approach for bootstrapping the entire hierarchical planning process.
It shows how abstract states and actions for new environments can be computed automatically.
It uses the learned abstractions in a novel multi-source bi-directional hierarchical robot planning algorithm.
arXiv Detail & Related papers (2022-02-02T08:11:20Z)
- Active Learning of Abstract Plan Feasibility [17.689758291966502]
We present an active learning approach to efficiently acquire an APF predictor through task-independent, curious exploration on a robot.
We leverage an infeasible subsequence property to prune candidate plans in the active learning strategy, allowing our system to learn from less data.
In a stacking domain where objects have non-uniform mass distributions, we show that our system enables a real robot to learn an APF model in four hundred self-supervised interactions.
arXiv Detail & Related papers (2021-07-01T18:17:01Z)
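One reading of the "infeasible subsequence property" mentioned in the entry above is that once some contiguous subsequence of abstract actions is known to be infeasible, every candidate plan containing it can be discarded without querying the robot. A minimal sketch under that reading follows; the function names and plan encoding are illustrative, not the paper's.

```python
# Hedged sketch of subsequence-based pruning for abstract plan feasibility learning.
from typing import List, Set, Tuple

Plan = Tuple[str, ...]  # a candidate abstract plan, e.g. ("pick(A)", "place(A,B)")


def contains_subsequence(plan: Plan, sub: Plan) -> bool:
    """True if `sub` appears as a contiguous subsequence of `plan`."""
    m = len(sub)
    return any(plan[i:i + m] == sub for i in range(len(plan) - m + 1))


def prune_candidates(candidates: List[Plan],
                     infeasible_subsequences: Set[Plan]) -> List[Plan]:
    """Drop every candidate plan containing a known-infeasible subsequence."""
    return [p for p in candidates
            if not any(contains_subsequence(p, s) for s in infeasible_subsequences)]


# Once ("place(A,B)", "stack(C,A)") has been observed to be infeasible, longer
# plans containing it are pruned before the robot ever tries them.
candidates = [
    ("pick(A)", "place(A,B)", "stack(C,A)"),
    ("pick(A)", "place(A,table)", "pick(C)"),
]
infeasible = {("place(A,B)", "stack(C,A)")}
print(prune_candidates(candidates, infeasible))
# -> [('pick(A)', 'place(A,table)', 'pick(C)')]
```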