Zero-Shot Iterative Formalization and Planning in Partially Observable Environments
- URL: http://arxiv.org/abs/2505.13126v2
- Date: Tue, 20 May 2025 13:53:50 GMT
- Title: Zero-Shot Iterative Formalization and Planning in Partially Observable Environments
- Authors: Liancheng Gong, Wang Zhu, Jesse Thomason, Li Zhang
- Abstract summary: We propose PDDLego+, a framework to formalize, plan, grow, and refine PDDL representations in a zero-shot manner. We show that PDDLego+ improves goal-reaching success and exhibits robustness against problem complexity.
- Score: 11.066479432278301
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Using LLMs not to predict plans but to formalize an environment into the Planning Domain Definition Language (PDDL) has been shown to improve performance and control. Existing work focuses on fully observable environments; we tackle the more realistic and challenging partially observable environments that lack complete, reliable information. We propose PDDLego+, a framework to iteratively formalize, plan, grow, and refine PDDL representations in a zero-shot manner, without needing access to any existing trajectories. On two textual simulated environments, we show that PDDLego+ improves goal-reaching success and exhibits robustness against problem complexity. We also show that the domain knowledge captured after a successful trial can benefit future tasks.
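To make the iterative loop concrete, here is a minimal Python sketch of what such a zero-shot formalize-plan-refine cycle could look like. It is an illustration under assumptions, not the authors' implementation: `query_llm`, `run_planner`, and the `env` interface are hypothetical placeholders.

```python
# Minimal sketch of a PDDLego+-style loop (hypothetical interfaces,
# not the authors' released code).
from typing import Callable, Optional

def iterative_formalize_and_plan(
    env,                                     # textual env: .observe() -> str, .step(action) -> bool
    query_llm: Callable[[str], str],         # LLM call: prompt -> updated PDDL text
    run_planner: Callable[[str, str], Optional[list]],  # (domain, problem) -> plan or None
    max_steps: int = 50,
) -> bool:
    """Zero-shot loop: formalize observations into PDDL, plan, act, refine."""
    domain, problem = "", ""
    for _ in range(max_steps):
        obs = env.observe()
        # Grow/refine the PDDL files from the newest partial observation.
        domain = query_llm(
            f"Update this PDDL domain given the observation.\n"
            f"Observation: {obs}\nCurrent domain:\n{domain}")
        problem = query_llm(
            f"Update this PDDL problem (objects, init, goal) given the observation.\n"
            f"Observation: {obs}\nCurrent problem:\n{problem}")
        plan = run_planner(domain, problem)
        if not plan:
            # Planner failure (malformed or unsolvable PDDL): ask for a repair.
            problem = query_llm(f"The planner failed on this problem; fix it:\n{problem}")
            continue
        # Execute only the first planned action: under partial observability,
        # the next observation may grow the representation and invalidate the rest.
        if env.step(plan[0]):
            return True
    return False
```

Executing one action per iteration before re-formalizing is one plausible way to realize the grow-and-refine cycle under partial observability; the paper's actual procedure may differ in how it detects planner failures and merges newly observed facts.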
Related papers
- Latent Diffusion Planning for Imitation Learning [78.56207566743154]
Latent Diffusion Planning (LDP) is a modular approach consisting of a planner and inverse dynamics model.
By separating planning from action prediction, LDP can benefit from the denser supervision signals of suboptimal and action-free data.
On simulated visual robotic manipulation tasks, LDP outperforms state-of-the-art imitation learning approaches.
arXiv Detail & Related papers (2025-04-23T17:53:34Z)
- Seeing is Believing: Belief-Space Planning with Foundation Models as Uncertainty Estimators [34.28879194786174]
Generalizable robotic mobile manipulation in open-world environments poses significant challenges due to long horizons, complex goals, and partial observability.
A promising approach to address these challenges involves planning with a library of parameterized skills, where a task planner sequences these skills to achieve goals specified in structured languages.
This paper introduces a novel framework that leverages vision-language models to estimate uncertainty and facilitate symbolic grounding.
arXiv Detail & Related papers (2025-04-04T07:48:53Z)
- VISO-Grasp: Vision-Language Informed Spatial Object-centric 6-DoF Active View Planning and Grasping in Clutter and Invisibility [31.50489359729733]
VISO-Grasp is a vision-informed system designed to address visibility constraints for grasping in severely occluded environments.
We introduce a multi-view uncertainty-driven grasp fusion mechanism that refines grasp confidence and directional uncertainty in real time.
VISO-Grasp achieves a success rate of 87.5% in target-oriented grasping with the fewest grasp attempts, outperforming baselines.
arXiv Detail & Related papers (2025-03-16T18:46:54Z)
- An Extensive Evaluation of PDDL Capabilities in off-the-shelf LLMs [11.998185452551878]
Large language models (LLMs) have exhibited proficiency in code generation and chain-of-thought reasoning.
This study evaluates the potential of LLMs to understand and generate Planning Domain Definition Language (PDDL).
arXiv Detail & Related papers (2025-02-27T15:13:07Z)
- On the Limit of Language Models as Planning Formalizers [4.145422873316857]
Large Language Models fail to create verifiable plans in grounded environments.
An emerging line of work shows success in using an LLM as a formalizer to generate a formal representation of the planning domain.
We observe that sufficiently large models can effectively formalize descriptions as PDDL, outperforming those that directly generate plans.
arXiv Detail & Related papers (2024-12-13T05:50:22Z)
- PDDLEGO: Iterative Planning in Textual Environments [56.12148805913657]
Planning in textual environments has been shown to be a long-standing challenge even for current models.
We propose PDDLEGO, which iteratively constructs a planning representation that can lead to a partial plan for a given sub-goal (a schematic sub-goal example is sketched after this list).
We show that plans produced by few-shot PDDLEGO are 43% more efficient than those generated end-to-end on the Coin Collector simulation.
arXiv Detail & Related papers (2024-05-30T08:01:20Z)
- PROC2PDDL: Open-Domain Planning Representations from Texts [56.627183903841164]
Proc2PDDL is the first dataset containing open-domain procedural texts paired with expert-annotated PDDL representations.
We show that Proc2PDDL is highly challenging, with GPT-3.5's success rate close to 0% and GPT-4's around 35%.
arXiv Detail & Related papers (2024-02-29T19:40:25Z)
- Real-World Planning with PDDL+ and Beyond [55.73913765642435]
We present Nyx, a novel PDDL+ planner built to emphasize lightness, simplicity, and, most importantly, adaptability.
Nyx can be tailored to virtually any potential real-world application requiring some form of AI Planning, paving the way for wider adoption of planning methods for solving real-world problems.
arXiv Detail & Related papers (2024-02-19T07:35:49Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation [107.5934592892763]
We propose DREAMWALKER, a world-model-based VLN-CE agent.
The world model is built to summarize the visual, topological, and dynamic properties of the complicated continuous environment.
It can simulate and evaluate possible plans entirely within this internal abstract world before executing costly actions.
arXiv Detail & Related papers (2023-08-14T23:45:01Z)
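For a concrete sense of what an intermediate representation in this line of work can look like, below is a schematic, hand-written example (not taken from any paper above) of a PDDL problem encoding an exploratory sub-goal in a Coin Collector-style textual environment, in the spirit of PDDLEGO's partial plans; all names are illustrative.

```python
# Schematic illustration only: a PDDL problem whose goal is an exploratory
# sub-goal, used when the final goal location is still unknown.
EXPLORATION_PROBLEM = """
(define (problem explore-step-3)
  (:domain coin-collector)                ; hypothetical domain name
  (:objects kitchen corridor - room)
  (:init (at kitchen)
         (connected kitchen corridor)
         (visited kitchen))
  ;; The coin's room is unknown, so the sub-goal is simply to visit a
  ;; new room; any plan reaching it is partial by design.
  (:goal (visited corridor)))
"""
```

Once such a sub-goal is reached, the new observation can be folded back into the problem file and the formalize-plan cycle repeats.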
This list is automatically generated from the titles and abstracts of the papers on this site.