Hierarchical Object-Oriented POMDP Planning for Object Rearrangement
- URL: http://arxiv.org/abs/2412.01348v2
- Date: Wed, 08 Jan 2025 18:20:46 GMT
- Title: Hierarchical Object-Oriented POMDP Planning for Object Rearrangement
- Authors: Rajesh Mangannavar, Alan Fern, Prasad Tadepalli
- Abstract summary: We introduce a novel Hierarchical Object-Oriented Partially Observed Markov Decision Process (HOO-POMDP) planning approach.
This approach comprises (a) an object-oriented POMDP planner generating sub-goals, (b) a set of low-level policies for sub-goal achievement, and (c) an abstraction system converting the continuous low-level world into a representation suitable for abstract planning.
We evaluate our system on varying numbers of objects, rooms, and problem types in AI2-THOR simulated environments with promising results.
- Score: 23.160007389272575
- Abstract: We present an online planning framework for solving multi-object rearrangement problems in partially observable, multi-room environments. Current object rearrangement solutions, primarily based on Reinforcement Learning or hand-coded planning methods, often lack adaptability to diverse challenges. To address this limitation, we introduce a novel Hierarchical Object-Oriented Partially Observed Markov Decision Process (HOO-POMDP) planning approach. This approach comprises (a) an object-oriented POMDP planner generating sub-goals, (b) a set of low-level policies for sub-goal achievement, and (c) an abstraction system converting the continuous low-level world into a representation suitable for abstract planning. We evaluate our system on varying numbers of objects, rooms, and problem types in AI2-THOR simulated environments with promising results.
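The three components in the abstract can be sketched as a control loop: an abstraction system maps raw observations to an abstract object-oriented state, a planner emits one sub-goal at a time, and a low-level policy pursues each sub-goal. This is a minimal, hypothetical illustration of that decomposition; all class, method, and policy names below are assumptions, not from the paper, and the toy planner is a greedy stand-in for the actual POMDP planner.

```python
class AbstractionSystem:
    """Maps low-level observations to an abstract object-oriented state."""

    def abstract(self, observation):
        # In the real system this would collapse continuous poses into
        # discrete placements; the toy observation is already
        # {object: placement}, so we just copy it.
        return dict(observation)


class SubGoalPlanner:
    """Stand-in for the object-oriented POMDP planner: emits one sub-goal."""

    def next_subgoal(self, abstract_state, goal):
        # Greedily pick one misplaced object to move to its goal placement.
        for obj, place in goal.items():
            if abstract_state.get(obj) != place:
                return (obj, place)
        return None  # all objects are correctly placed


def rearrange(env, planner, abstraction, policies, goal, max_steps=100):
    """Interleave abstract planning with low-level policy execution."""
    for _ in range(max_steps):
        state = abstraction.abstract(env.observe())
        subgoal = planner.next_subgoal(state, goal)
        if subgoal is None:
            return True  # rearrangement complete
        obj, place = subgoal
        # Delegate the sub-goal to a low-level policy (navigation,
        # grasping, placing would live behind this call).
        policies["move"](env, obj, place)
    return False
```

The key design point the abstract emphasizes is that the planner never sees the continuous world directly: it plans only over the abstract state, which keeps the POMDP search space small while the low-level policies absorb the continuous control problem.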
Related papers
- Platform-Aware Mission Planning [50.56223680851687]
We introduce the problem of Platform-Aware Mission Planning (PAMP), addressing it in the setting of temporal durative actions.
The first baseline approach amalgamates the mission and platform levels, while the second is based on an abstraction-refinement loop.
We prove the soundness and completeness of the proposed approaches and validate them experimentally.
arXiv Detail & Related papers (2025-01-16T16:20:37Z) - Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Compositional Foundation Models for Hierarchical Planning [52.18904315515153]
We propose a foundation model that leverages expert foundation models, trained individually on language, vision, and action data, to solve long-horizon tasks.
We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model.
Generated video plans are then grounded in visuo-motor control through an inverse dynamics model that infers actions from the generated videos.
arXiv Detail & Related papers (2023-09-15T17:44:05Z) - Effective Baselines for Multiple Object Rearrangement Planning in Partially Observable Mapped Environments [5.32429768581469]
This paper aims to enable home-assistive intelligent agents to efficiently plan for rearrangement under partial observability.
We investigate monolithic and modular deep reinforcement learning (DRL) methods for planning in our setting.
We find that monolithic DRL methods do not succeed at long-horizon planning needed for multi-object rearrangement.
We also show that our greedy modular agents are empirically optimal when the objects that need to be rearranged are uniformly distributed in the environment.
arXiv Detail & Related papers (2023-01-24T08:03:34Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Multiple Plans are Better than One: Diverse Stochastic Planning [26.887796946596243]
In planning problems, it is often challenging to fully model the desired specifications.
In particular, in human-robot interaction, such difficulty may arise due to human preferences that are either private or complex to model.
We formulate a problem, called diverse planning, that aims to generate a set of representative behaviors that are near-optimal.
arXiv Detail & Related papers (2020-12-31T07:29:11Z) - Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning [78.65083326918351]
We consider alternatives to an implicit sequential planning assumption.
We propose Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS) for approximating the optimal plan.
We show that this algorithmic flexibility over planning order leads to improved results in navigation tasks in grid-worlds.
arXiv Detail & Related papers (2020-04-23T18:08:58Z)
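The DC-MCTS entry above drops the usual left-to-right planning order: a (start, goal) pair is recursively split at an intermediate sub-goal, and each half is solved independently. The sketch below is a hypothetical, much-simplified illustration of that divide-and-conquer idea on a grid world; the actual algorithm uses Monte Carlo Tree Search with a learned sub-goal proposal distribution, which is replaced here by a heuristic ordering of candidate midpoints.

```python
from itertools import product


def dist(a, b):
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def neighbors(s, size):
    """4-connected neighbors of cell s inside a size x size grid."""
    x, y = s
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]


def dc_plan(start, goal, size=5, depth=6):
    """Find a path by recursively splitting (start, goal) at a midpoint.

    Instead of extending a partial plan step by step, we pick a sub-goal
    `mid` and solve (start, mid) and (mid, goal) independently, so the
    problem size roughly halves at each level of recursion.
    """
    if start == goal:
        return [start]
    if goal in neighbors(start, size):
        return [start, goal]
    if depth == 0:
        return None
    # Heuristic stand-in for the learned sub-goal proposer: prefer
    # midpoints that balance the two halves of the problem.
    mids = sorted(product(range(size), repeat=2),
                  key=lambda m: max(dist(start, m), dist(m, goal)))
    for mid in mids:
        if mid in (start, goal):
            continue
        left = dc_plan(start, mid, size, depth - 1)
        if left is None:
            continue
        right = dc_plan(mid, goal, size, depth - 1)
        if right is not None:
            return left + right[1:]  # drop duplicated midpoint
    return None
```

Because each split balances the two halves, the recursion depth grows roughly logarithmically with path length, which is the flexibility over planning order that the paper reports as helpful in grid-world navigation.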
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.