Option Discovery for Autonomous Generation of Symbolic Knowledge
- URL: http://arxiv.org/abs/2206.01815v1
- Date: Fri, 3 Jun 2022 20:46:34 GMT
- Title: Option Discovery for Autonomous Generation of Symbolic Knowledge
- Authors: Gabriele Sartor, Davide Zollo, Marta Cialdea Mayer, Angelo Oddi,
Riccardo Rasconi and Vieri Giuliano Santucci
- Abstract summary: We present an empirical study where we demonstrate the possibility of developing an artificial agent that is capable of autonomously exploring an experimental scenario.
During the exploration, the agent discovers and learns interesting options that allow it to interact with the environment without any pre-assigned goal, and then abstracts and re-uses the acquired knowledge to solve tasks assigned ex-post.
- Score: 3.1317409221921135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we present an empirical study where we demonstrate the
possibility of developing an artificial agent that is capable of autonomously
exploring an experimental scenario. During the exploration, the agent is able to
discover and learn interesting options that allow it to interact with the
environment without any pre-assigned goal, and then to abstract and re-use the
acquired knowledge to solve tasks assigned ex-post. We test the system
in the so-called Treasure Game domain described in the recent literature and we
empirically demonstrate that the discovered options can be abstracted into a
probabilistic symbolic planning model (using the PPDDL language), which allows
the agent to generate symbolic plans to achieve extrinsic goals.
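As a concrete illustration of the abstraction step, the sketch below (Python, not the authors' code) shows how a discovered option might be rendered as a PPDDL-style probabilistic operator. The option name, symbols (e.g. pull-handle, door-open), and probabilities are hypothetical placeholders for what the agent would learn in a Treasure-Game-like domain.

```python
from dataclasses import dataclass, field


def conj(symbols):
    """Render a list of propositional symbols as a PPDDL conjunction."""
    return "(and " + " ".join(f"({s})" for s in symbols) + ")"


@dataclass
class AbstractOption:
    name: str            # learned option, e.g. a reach-and-pull skill
    precondition: list   # symbols that must hold before the option is executed
    # each outcome: (probability, symbols made true, symbols made false)
    effects: list = field(default_factory=list)

    def to_ppddl(self) -> str:
        outcomes = []
        for prob, add, delete in self.effects:
            lits = [f"({s})" for s in add] + [f"(not ({s}))" for s in delete]
            outcomes.append(f"{prob} (and {' '.join(lits)})")
        return (f"(:action {self.name}\n"
                f"  :precondition {conj(self.precondition)}\n"
                f"  :effect (probabilistic {' '.join(outcomes)}))")


# Hypothetical operator: pulling a handle usually opens a door; occasionally the
# handle drops without the door opening.
op = AbstractOption(
    name="pull-handle",
    precondition=["agent-at-handle", "handle-up"],
    effects=[(0.9, ["door-open", "handle-down"], ["handle-up"]),
             (0.1, ["handle-down"], ["handle-up"])],
)
print(op.to_ppddl())
```

A PPDDL planner could then chain operators of this form to produce symbolic plans for extrinsic goals, as described in the abstract.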
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z) - Synthesizing Evolving Symbolic Representations for Autonomous Systems [2.4233709516962785]
This paper presents an open-ended learning system able to synthesize its experience from scratch into a PPDDL representation and update it over time.
The system explores the environment and iteratively (a) discovers options, (b) explores the environment using those options, (c) abstracts the collected knowledge, and (d) plans.
arXiv Detail & Related papers (2024-09-18T07:23:26Z) - Embodied Instruction Following in Unknown Environments [66.60163202450954]
We propose an embodied instruction following (EIF) method for complex tasks in unknown environments.
We build a hierarchical embodied instruction following framework comprising a high-level task planner and a low-level exploration controller.
For the task planner, we generate feasible step-by-step plans for accomplishing the human's goal according to the task completion process and the known visual clues.
arXiv Detail & Related papers (2024-06-17T17:55:40Z) - Learning Geometric Representations of Objects via Interaction [25.383613570119266]
We address the problem of learning representations from observations of a scene involving an agent and an external object the agent interacts with.
We propose a representation learning framework that extracts the physical-space locations of both the agent and the object from unstructured observations of arbitrary nature.
arXiv Detail & Related papers (2023-09-11T09:45:22Z) - DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation [107.5934592892763]
We propose DREAMWALKER -- a world model based VLN-CE agent.
The world model is built to summarize the visual, topological, and dynamic properties of the complicated continuous environment.
It can simulate and evaluate possible plans entirely within this internal abstract world before executing costly actions.
arXiv Detail & Related papers (2023-08-14T23:45:01Z) - Embodied Agents for Efficient Exploration and Smart Scene Description [47.82947878753809]
We tackle a setting for visual navigation in which an autonomous agent needs to explore and map an unseen indoor environment.
We propose and evaluate an approach that combines recent advances in visual robotic exploration and image captioning.
Our approach can generate smart scene descriptions that maximize semantic knowledge of the environment and avoid repetitions.
arXiv Detail & Related papers (2023-01-17T19:28:01Z) - Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z) - Abstract Interpretation for Generalized Heuristic Search in Model-Based
Planning [50.96320003643406]
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation of symbolic world models.
We illustrate how abstract interpretation can serve as a unifying framework for these abstractions, extending the reach of search to richer world models.
These abstractions can also be integrated with learning, allowing agents to jumpstart planning in novel world models via abstraction-derived information.
arXiv Detail & Related papers (2022-08-05T00:22:11Z) - Learning Abstract and Transferable Representations for Planning [25.63560394067908]
We propose a framework for autonomously learning state abstractions of an agent's environment.
These abstractions are task-independent, and so can be reused to solve new tasks.
We show how to combine these portable representations with problem-specific ones to generate a sound description of a specific task.
arXiv Detail & Related papers (2022-05-04T14:40:04Z) - Online Grounding of PDDL Domains by Acting and Sensing in Unknown
Environments [62.11612385360421]
This paper proposes a framework that allows an agent to perform different tasks.
We integrate machine learning models to abstract the sensory data, symbolic planning for goal achievement, and path planning for navigation.
We evaluate the proposed method in accurate simulated environments, where the sensors are an on-board RGB-D camera, GPS, and compass.
arXiv Detail & Related papers (2021-12-18T21:48:20Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.