Online Grounding of PDDL Domains by Acting and Sensing in Unknown
Environments
- URL: http://arxiv.org/abs/2112.10007v1
- Date: Sat, 18 Dec 2021 21:48:20 GMT
- Title: Online Grounding of PDDL Domains by Acting and Sensing in Unknown
Environments
- Authors: Leonardo Lamanna, Luciano Serafini, Alessandro Saetti, Alfonso
Gerevini, Paolo Traverso
- Abstract summary: This paper proposes a framework that grounds an abstract (PDDL) planning domain online, allowing an agent to perform different tasks in an unknown environment.
We integrate machine learning models to abstract the sensory data, symbolic planning for goal achievement, and path planning for navigation.
We evaluate the proposed method in accurate simulated environments, where the sensors are an on-board RGB-D camera, GPS and compass.
- Score: 62.11612385360421
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To effectively use an abstract (PDDL) planning domain to achieve goals in an
unknown environment, an agent must instantiate such a domain with the objects
of the environment and their properties. If the agent has an egocentric and
partial view of the environment, it needs to act, sense, and abstract the
perceived data in the planning domain. Furthermore, the agent needs to compile
the plans computed by a symbolic planner into low-level actions executable by
its actuators. This paper proposes a framework that addresses these
requirements and allows an agent to perform different tasks. For this purpose,
we integrate machine learning models to abstract the sensory data, symbolic
planning for goal achievement, and path planning for navigation. We evaluate
the proposed method in accurate simulated environments, where the sensors are
an on-board RGB-D camera, GPS and compass.
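The abstract describes an act-sense-abstract-plan loop: the agent senses with its on-board sensors, abstracts the percepts into objects and properties that ground the PDDL problem, calls a symbolic planner, and compiles the resulting actions into low-level commands via path planning. The Python sketch below illustrates one possible shape of such a loop; it is a minimal illustration only, and every name in it (the env interface, abstract_percepts, symbolic_plan, compile_to_low_level) is a hypothetical placeholder, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class PDDLProblem:
    # Grounded PDDL problem built online from perceived objects and properties.
    objects: set = field(default_factory=set)
    facts: set = field(default_factory=set)    # e.g. ("at", "cup1", "pose_3_4")
    goal: set = field(default_factory=set)

    def goal_satisfied(self) -> bool:
        return self.goal <= self.facts

def abstract_percepts(rgbd_frame, pose):
    # Placeholder for the learned perception models: map raw RGB-D data and the
    # agent pose to symbolic objects and properties. Returns (object, fact) pairs.
    return [("cup1", ("at", "cup1", "pose_3_4"))]

def symbolic_plan(problem):
    # Placeholder for a call to an off-the-shelf PDDL planner on the current grounding.
    return [("goto", "pose_3_4"), ("pick", "cup1")]

def compile_to_low_level(action, pose):
    # Placeholder for path planning: turn one symbolic action into motor commands.
    return ["rotate_left", "move_forward"]

def online_grounding_loop(env, goal, max_steps=100):
    # env is assumed to expose sense(), explore(), and execute() methods.
    problem = PDDLProblem(goal=set(goal))
    for _ in range(max_steps):
        rgbd_frame, pose = env.sense()          # on-board RGB-D camera, GPS, compass
        for obj, fact in abstract_percepts(rgbd_frame, pose):
            problem.objects.add(obj)            # ground the domain with new objects
            problem.facts.add(fact)             # and their observed properties
        if problem.goal_satisfied():
            return True
        plan = symbolic_plan(problem)           # replan on the updated grounding
        if not plan:
            env.explore()                       # no plan yet: keep acting to sense more
            continue
        for cmd in compile_to_low_level(plan[0], pose):
            env.execute(cmd)                    # execute the first action, then re-sense
    return False

Replanning after each executed action reflects the online nature of the grounding: objects and properties discovered while acting immediately extend the planning problem.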
Related papers
- Embodied Instruction Following in Unknown Environments [66.60163202450954]
We propose an embodied instruction following (EIF) method for complex tasks in unknown environments.
We build a hierarchical embodied instruction following framework including the high-level task planner and the low-level exploration controller.
For the task planner, we generate feasible step-by-step plans for accomplishing the human's goal according to the task completion process and the known visual clues.
arXiv Detail & Related papers (2024-06-17T17:55:40Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- PlanT: Explainable Planning Transformers via Object-Level Representations [64.93938686101309]
PlanT is a novel approach for planning in the context of self-driving.
PlanT is based on imitation learning with a compact object-level input representation.
Our results indicate that PlanT can focus on the most relevant object in the scene, even when this object is geometrically distant.
arXiv Detail & Related papers (2022-10-25T17:59:46Z)
- Generating Executable Action Plans with Environmentally-Aware Language Models [4.162663632560141]
Large Language Models (LLMs) trained using massive text datasets have recently shown promise in generating action plans for robotic agents.
We propose an approach to generate environmentally-aware action plans that agents are better able to execute.
arXiv Detail & Related papers (2022-10-10T18:56:57Z)
- HARPS: An Online POMDP Framework for Human-Assisted Robotic Planning and Sensing [1.3678064890824186]
The Human Assisted Robotic Planning and Sensing (HARPS) framework is presented for active semantic sensing and planning in human-robot teams.
This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments.
Simulations of a UAV-enabled target search application in a large-scale partially structured environment show significant improvements in time and belief state estimates.
arXiv Detail & Related papers (2021-10-20T00:41:57Z)
- Hierarchical Object-to-Zone Graph for Object Navigation [43.558927774552295]
In unseen environments, when the target object is not in the egocentric view, the agent may not be able to make wise decisions.
We propose a hierarchical object-to-zone (HOZ) graph to guide the agent in a coarse-to-fine manner.
An online-learning mechanism is also proposed to update the HOZ graph according to real-time observations in new environments.
arXiv Detail & Related papers (2021-09-05T13:02:17Z)
- Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances [26.082034134908785]
We show that a task-and-motion planner can be used to plan intelligent behaviors even in the absence of a priori knowledge regarding the set of manipulable objects.
We demonstrate that this strategy can enable a single system to perform a wide variety of real-world multi-step manipulation tasks.
arXiv Detail & Related papers (2021-08-09T16:13:47Z)
- Object Goal Navigation using Goal-Oriented Semantic Exploration [98.14078233526476]
This work studies the problem of object goal navigation, which involves navigating to an instance of the given object category in unseen environments.
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently.
arXiv Detail & Related papers (2020-07-01T17:52:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.