Visual Environment-Interactive Planning for Embodied Complex-Question Answering
- URL: http://arxiv.org/abs/2504.00775v1
- Date: Tue, 01 Apr 2025 13:26:28 GMT
- Title: Visual Environment-Interactive Planning for Embodied Complex-Question Answering
- Authors: Ning Lan, Baoshan Ou, Xuemei Xie, Guangming Shi
- Abstract summary: This study focuses on the Embodied Complex-Question Answering task. The core of this task lies in making appropriate plans based on perception of the visual environment. Considering multi-step planning, this paper proposes a framework for formulating plans in a sequential manner.
- Score: 28.929345360469807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study focuses on the Embodied Complex-Question Answering task, in which an embodied robot must understand human questions with intricate structures and abstract semantics. The core of this task lies in making appropriate plans based on perception of the visual environment. Existing methods often generate plans in a once-for-all manner, i.e., one-step planning. Such approaches rely on large models without a sufficient understanding of the environment. Considering multi-step planning, this paper proposes a framework for formulating plans in a sequential manner. To ensure that the framework can tackle complex questions, we create a structured semantic space in which hierarchical visual perception and a chain expression of the question's essence interact iteratively. This space makes sequential task planning possible. Within the framework, we first parse human natural language based on a visual hierarchical scene graph, which clarifies the intention of the question. Then, we incorporate external rules to make a plan for the current step, weakening the reliance on large models. Every plan is generated based on feedback from visual perception, with multiple rounds of interaction until an answer is obtained. This approach enables continuous feedback and adjustment, allowing the robot to optimize its action strategy. To test the framework, we contribute a new dataset with more complex questions. Experimental results demonstrate that our approach performs well and stably on complex tasks, and its feasibility in real-world scenarios has been established, indicating practical applicability.
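To make the loop concrete: each round the robot perceives, re-grounds the question in the scene graph, applies an external rule to choose one action for the current step, acts, and perceives again. Below is a minimal, self-contained toy sketch of that loop; the dictionary scene graph, the string-matching parse, and the two rules are hypothetical stand-ins, not the authors' implementation.

```python
# Toy sketch of the sequential plan-perceive-act loop; all names and rules
# are hypothetical stand-ins, not the authors' implementation.

def perceive(world):
    """Hierarchical perception stand-in: rebuild the scene graph each round."""
    return {"objects": set(world["visible"]), "relations": list(world["relations"])}

def parse_question(question, graph):
    """Ground the question into a chain of sub-goals (its 'essence')."""
    target = question.split("the ")[-1].rstrip("?")        # toy parsing only
    return [("find", target), ("locate", target)]

def satisfied(goal, graph):
    kind, target = goal
    if kind == "find":
        return target in graph["objects"]
    return any(s == target for s, _, _ in graph["relations"])  # "locate"

def plan_next_step(subgoals, graph):
    """External rules pick one action for the current step only."""
    for goal in subgoals:
        if not satisfied(goal, graph):
            return ("explore",) if goal[0] == "find" else ("inspect", goal[1])
    return None                                            # question resolved

def act(world, action):
    """Toy environment response standing in for real robot execution."""
    if action[0] == "explore":
        world["visible"].append("cup")                     # cup comes into view
    else:
        world["relations"].append(("cup", "on", "table"))  # relation discovered

def answer_question(world, question, max_rounds=10):
    for _ in range(max_rounds):
        graph = perceive(world)                            # fresh visual feedback
        action = plan_next_step(parse_question(question, graph), graph)
        if action is None:
            s, p, o = next(r for r in graph["relations"] if r[0] in question)
            return f"The {s} is {p} the {o}."
        act(world, action)                                 # act, then re-perceive
    return "unknown"

world = {"visible": ["table"], "relations": []}
print(answer_question(world, "where is the cup?"))  # -> The cup is on the table.
```

The design point mirrored here is that planning consults fresh perception every round instead of committing to a once-for-all plan.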
Related papers
- Flex: End-to-End Text-Instructed Visual Navigation with Foundation Models [59.892436892964376]
We investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies.
Our findings are synthesized in Flex (Fly-lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors.
We demonstrate the effectiveness of this approach on quadrotor fly-to-target tasks, where agents trained via behavior cloning successfully generalize to real-world scenes.
arXiv Detail & Related papers (2024-10-16T19:59:31Z)
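A hedged sketch of the recipe in the Flex entry above: a pre-trained VLM vision tower is kept frozen and used only as a patch-wise feature extractor, and a small policy head trained by behavior cloning maps pooled patch features plus the instruction embedding to control commands. The module interfaces and dimensions below are assumptions, not Flex's actual code.

```python
import torch
import torch.nn as nn

class PatchFeaturePolicy(nn.Module):
    """Small trainable head on top of frozen VLM feature extractors."""
    def __init__(self, vision_tower: nn.Module, text_tower: nn.Module,
                 feat_dim: int = 768, act_dim: int = 4):
        super().__init__()
        self.vision = vision_tower.eval()   # frozen patch-wise extractor
        self.text = text_tower.eval()       # frozen instruction encoder
        for p in list(self.vision.parameters()) + list(self.text.parameters()):
            p.requires_grad_(False)
        self.head = nn.Sequential(          # the only trained parameters
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, image: torch.Tensor, instruction: torch.Tensor):
        with torch.no_grad():
            patches = self.vision(image)    # (B, num_patches, feat_dim)
            text = self.text(instruction)   # (B, feat_dim)
        visual = patches.mean(dim=1)        # pool patch-wise features
        return self.head(torch.cat([visual, text], dim=-1))

# Behavior-cloning update (expert actions from demonstrations):
#   pred = policy(images, instructions)
#   loss = torch.nn.functional.mse_loss(pred, expert_actions)
#   loss.backward(); optimizer.step()
```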
- Embodied Instruction Following in Unknown Environments [66.60163202450954]
We propose an embodied instruction following (EIF) method for complex tasks in unknown environments.
We build a hierarchical embodied instruction following framework consisting of a high-level task planner and a low-level exploration controller.
The task planner generates feasible step-by-step plans for accomplishing the human goal, according to the task completion process and the visual clues known so far.
arXiv Detail & Related papers (2024-06-17T17:55:40Z)
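In the spirit of the hierarchical framework this entry summarizes, the sketch below separates a high-level planner, which proposes the next unfinished step, from a low-level controller that explores whenever the needed object has not yet been observed. The robot interface and the goal decomposition are illustrative assumptions, not the paper's API.

```python
def decompose(goal):
    """Stand-in decomposition: 'find knife, slice bread' -> two steps."""
    return [s.strip() for s in goal.split(",")]

def high_level_planner(goal, done_steps):
    for step in decompose(goal):
        if step not in done_steps:
            return step                       # next unfinished step
    return None

def low_level_controller(step, clues, robot):
    target = step.split()[-1]
    if target not in clues:                   # unknown environment: explore
        robot.explore()                       # e.g. frontier-based exploration
        return False                          # step not finished yet
    robot.navigate_to(clues[target])
    robot.interact(step)
    return True

def follow_instruction(goal, robot, max_steps=50):
    done, clues = set(), {}
    for _ in range(max_steps):
        clues.update(robot.observe())         # accumulate visual clues
        step = high_level_planner(goal, done)
        if step is None:
            return True                       # all steps accomplished
        if low_level_controller(step, clues, robot):
            done.add(step)
    return False
```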
- Socratic Planner: Self-QA-Based Zero-Shot Planning for Embodied Instruction Following [17.608330952846075]
Embodied Instruction Following (EIF) is the task of executing natural language instructions by navigating and interacting with objects in interactive environments.
A key challenge in EIF is compositional task planning, typically addressed through supervised learning or few-shot in-context learning with labeled data.
We introduce the Socratic Planner, a self-QA-based zero-shot planning method that infers an appropriate plan without any further training.
arXiv Detail & Related papers (2024-04-21T08:10:20Z)
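A minimal sketch of self-QA-based zero-shot planning in the spirit of the Socratic Planner entry above: the model asks and answers its own clarifying sub-questions before emitting a plan, with no further training or labeled examples. The `llm` callable and the prompts are illustrative assumptions, not the paper's actual prompts.

```python
def socratic_plan(instruction: str, llm, num_questions: int = 3) -> list[str]:
    """Zero-shot planning via self-asked, self-answered sub-questions."""
    qa_trace = []
    for _ in range(num_questions):
        question = llm(
            f"Instruction: {instruction}\nKnown so far: {qa_trace}\n"
            "Ask one clarifying sub-question about the objects, locations, "
            "or preconditions this instruction involves:")
        answer = llm(f"Answer briefly from commonsense: {question}")
        qa_trace.append((question, answer))   # decomposed understanding
    plan = llm(
        f"Instruction: {instruction}\nSelf-QA trace: {qa_trace}\n"
        "Write a numbered step-by-step plan of primitive actions:")
    return [line for line in plan.splitlines() if line.strip()]
```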
- Deep hybrid models: infer and plan in a dynamic world [0.0]
We present a solution, based on active inference, for complex control tasks. The proposed architecture exploits hybrid (discrete and continuous) processing. We show that the model can tackle the presented task under different conditions.
arXiv Detail & Related papers (2024-02-01T15:15:25Z)
- Learning Top-k Subtask Planning Tree based on Discriminative Representation Pre-training for Decision Making [9.302910360945042]
Planning with prior knowledge extracted from complicated real-world tasks is crucial for humans to make accurate decisions.
We introduce a multiple-encoder and individual-predictor regime to learn task-essential representations from sufficient data for simple subtasks.
We also use the attention mechanism to generate a top-k subtask planning tree, which customizes subtask execution plans in guiding complex decisions on unseen tasks.
arXiv Detail & Related papers (2023-12-18T09:00:31Z)
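The top-k subtask planning tree from the entry above can be pictured as follows: attention between the current task representation and candidate-subtask embeddings scores the candidates, and only the k best are expanded at each node. The shapes, the `transition` model, and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def topk_children(state_vec, subtask_embs, k=3):
    """Score candidate subtasks by scaled dot-product attention, keep top-k."""
    d = state_vec.shape[-1]
    scores = subtask_embs @ state_vec / d ** 0.5   # (num_subtasks,)
    weights = F.softmax(scores, dim=-1)
    topv, topi = weights.topk(k)
    return list(zip(topi.tolist(), topv.tolist()))

def build_tree(state_vec, subtask_embs, transition, depth=2, k=3):
    """Recursively expand the k most promising subtasks into a plan tree."""
    if depth == 0:
        return []
    tree = []
    for idx, weight in topk_children(state_vec, subtask_embs, k):
        next_state = transition(state_vec, subtask_embs[idx])  # predicted state
        tree.append((idx, weight, build_tree(next_state, subtask_embs,
                                             transition, depth - 1, k)))
    return tree
```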
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- CoPAL: Corrective Planning of Robot Actions with Large Language Models [7.944803163555092]
We propose a system architecture that orchestrates a seamless interplay between cognitive levels, encompassing reasoning, planning, and motion generation.
At its core lies a novel replanning strategy that handles physically grounded, logical, and semantic errors in the generated plans.
arXiv Detail & Related papers (2023-10-11T07:39:42Z)
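A hedged sketch of a corrective replanning loop in the spirit of CoPAL: the generated plan is executed (or simulated), and the first physically grounded, logical, or semantic error is fed back into the language model so it can repair the plan. `llm`, `execute_step`, and the error strings are illustrative assumptions, not the paper's interfaces.

```python
def plan_with_correction(task: str, llm, execute_step, max_retries: int = 5):
    """Generate a plan, run it, and replan on the first reported error."""
    feedback = ""
    for _ in range(max_retries):
        plan = llm(f"Task: {task}\nPrevious errors: {feedback or 'none'}\n"
                   "Produce a step-by-step robot plan:")
        steps = [s for s in plan.splitlines() if s.strip()]
        for i, step in enumerate(steps):
            ok, error = execute_step(step)    # e.g. motion failure, unmet
            if not ok:                        # precondition, wrong object
                feedback = f"step {i} ('{step}') failed: {error}"
                break                         # replan with the error in context
        else:
            return steps                      # every step succeeded
    raise RuntimeError(f"could not repair the plan: {feedback}")
```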
- Compositional Foundation Models for Hierarchical Planning [52.18904315515153]
We propose a foundation model which leverages multiple expert foundation models, trained individually on language, vision, and action data, to jointly solve long-horizon tasks.
We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model.
Generated video plans are then grounded to visual-motor control, through an inverse dynamics model that infers actions from generated videos.
arXiv Detail & Related papers (2023-09-15T17:44:05Z)
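The three-stage hierarchy this entry describes can be sketched as a simple pipeline: a language model writes symbolic subgoals, a video model imagines each subgoal as a short rollout from the current observation, and an inverse-dynamics model recovers the action between consecutive generated frames. The three model interfaces below are assumptions, not the paper's actual APIs.

```python
def hierarchical_plan(task, obs, llm, video_model, inverse_dynamics):
    """Language -> imagined video -> actions, chained across subgoals."""
    subgoals = llm(f"Decompose into subgoals: {task}").splitlines()
    actions = []
    for subgoal in subgoals:
        frames = video_model(start_frame=obs, prompt=subgoal)  # imagined rollout
        for f_t, f_next in zip(frames, frames[1:]):
            actions.append(inverse_dynamics(f_t, f_next))      # action between frames
        obs = frames[-1]        # next subgoal starts where this one ended
    return actions
```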
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Deep compositional robotic planners that follow natural language commands [21.481360281719006]
We show how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands.
Our approach combines a deep network, structured according to the parse of a complex command that includes objects, verbs, spatial relations, and attributes, with a sampling-based planner.
arXiv Detail & Related papers (2020-02-12T19:56:58Z)