E-PDDL: A Standardized Way of Defining Epistemic Planning Problems
- URL: http://arxiv.org/abs/2107.08739v1
- Date: Mon, 19 Jul 2021 10:20:20 GMT
- Title: E-PDDL: A Standardized Way of Defining Epistemic Planning Problems
- Authors: Francesco Fabiano, Biplav Srivastava, Jonathan Lenchner, Lior Horesh,
Francesca Rossi, Marianna Bergamaschi Ganapini
- Abstract summary: Epistemic Planning (EP) refers to an automated planning setting where the agent reasons in the space of knowledge states.
We propose a unified way of specifying EP problems - the Epistemic Planning Domain Definition Language, E-PDDL.
We show that E-PDDL can be supported by leading MEP planners and provide corresponding parser code that translates EP problems specified in E-PDDL into (M)EP problems that can be handled by several planners.
- Score: 11.381221864778976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Epistemic Planning (EP) refers to an automated planning setting where the
agent reasons in the space of knowledge states and tries to find a plan to
reach a desirable state from the current state. Its general form, the
Multi-agent Epistemic Planning (MEP) problem, involves multiple agents who need
to reason about both the state of the world and the information flow between
agents. For the MEP problem, multiple approaches have been developed recently
with varying restrictions, such as considering only the concept of knowledge
while not allowing the idea of belief, or not allowing for "complex" modal operators
such as those needed to handle dynamic common knowledge. While the diversity of
approaches has led to a deeper understanding of the problem space, the lack of
a standardized way to specify MEP problems independently of solution approaches
has created difficulties in comparing performance of planners, identifying
promising techniques, exploring new strategies like ensemble methods, and
making it easy for new researchers to contribute to this research area. To
address the situation, we propose a unified way of specifying EP problems - the
Epistemic Planning Domain Definition Language, E-PDDL. We show that E-PDDL can
be supported by leading MEP planners and provide corresponding parser code that
translates EP problems specified in E-PDDL into (M)EP problems that can be
handled by several planners. This work is also useful in building more general
epistemic planning environments where we envision a meta-cognitive module that
takes a planning problem in E-PDDL, identifies and assesses some of its
features, and autonomously decides which planner is the best one to solve it.
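To make the envisioned pipeline concrete, the following is a minimal Python sketch of such a meta-cognitive module: it scans an E-PDDL-style problem string for a few coarse features (number of agents, presence of belief or common-knowledge operators) and dispatches to a planner accordingly. The problem snippet, feature checks, regexes, and planner names are illustrative assumptions only; they do not reproduce the paper's actual parser or the exact E-PDDL syntax.

```python
# Hypothetical sketch of the meta-cognitive module described in the abstract:
# inspect an E-PDDL-style problem, derive a few coarse features, pick a planner.
# All names and patterns below are assumptions for illustration, not the
# paper's implementation.
import re
from dataclasses import dataclass


@dataclass
class ProblemFeatures:
    num_agents: int
    uses_belief: bool            # belief-style operators present
    uses_common_knowledge: bool  # group/common-knowledge operators present


def extract_features(epddl_text: str) -> ProblemFeatures:
    """Very rough feature extraction from an E-PDDL-like problem string."""
    agents = re.findall(r":agents?\s+([^)]*)\)", epddl_text)
    agent_names = agents[0].split() if agents else []
    return ProblemFeatures(
        num_agents=len(agent_names),
        uses_belief=bool(re.search(r"\[B\w*\s", epddl_text)
                         or "(believes" in epddl_text),
        uses_common_knowledge="[C " in epddl_text or "(common" in epddl_text,
    )


def choose_planner(f: ProblemFeatures) -> str:
    """Toy dispatch rule: richer epistemic features -> more expressive planner."""
    if f.uses_common_knowledge:
        return "full-epistemic-planner"      # handles group/common knowledge
    if f.uses_belief or f.num_agents > 1:
        return "multi-agent-belief-planner"
    return "single-agent-knowledge-planner"


if __name__ == "__main__":
    # E-PDDL-flavored mock problem (not guaranteed to be valid E-PDDL).
    example = """
    (define (problem coin-in-the-box)
      (:agents a b c)
      (:goal [C (a b c)] (opened box))
    )
    """
    feats = extract_features(example)
    print(feats, "->", choose_planner(feats))
```

In an actual system the feature extraction would be done on the parsed E-PDDL representation rather than on raw text, and the dispatch table would be learned or configured per planner; the sketch only illustrates the decide-then-delegate structure the abstract describes.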
Related papers
- Ask-before-Plan: Proactive Language Agents for Real-World Planning [68.08024918064503]
Proactive Agent Planning requires language agents to predict clarification needs based on user-agent conversation and agent-environment interaction.
We propose a novel multi-agent framework, Clarification-Execution-Planning (CEP), which consists of three agents specialized in clarification, execution, and planning.
arXiv Detail & Related papers (2024-06-18T14:07:28Z) - Contingency Planning Using Bi-level Markov Decision Processes for Space Missions [16.62956274851929]
This work focuses on autonomous contingency planning for scientific missions.
It enables rapid policy computation from any off-nominal point in the state space in the event of a delay or deviation from the nominal mission plan.
arXiv Detail & Related papers (2024-02-26T06:42:30Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - AdaPlanner: Adaptive Planning from Feedback with Language Models [56.367020818139665]
Large language models (LLMs) have recently demonstrated their potential to act as autonomous agents for sequential decision-making tasks.
We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback.
To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities.
arXiv Detail & Related papers (2023-05-26T05:52:27Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP)
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z) - Comprehensive Multi-Agent Epistemic Planning [0.0]
This manuscript is focused on a specialized kind of planning known as Multi-agent Epistemic Planning (MEP).
EP refers to an automated planning setting where the agent reasons in the space of knowledge/belief states and tries to find a plan to reach a desirable state from a starting one.
Its general form, the MEP problem, involves multiple agents who need to reason about both the state of the world and the information flows between agents.
arXiv Detail & Related papers (2021-09-17T01:50:18Z) - On Solving a Stochastic Shortest-Path Markov Decision Process as Probabilistic Inference [5.517104116168873]
We propose solving the general Stochastic Shortest-Path Markov Decision Process (SSP MDP) as probabilistic inference.
We discuss online and offline methods for planning under uncertainty.
arXiv Detail & Related papers (2021-09-13T11:07:52Z) - Modelling Multi-Agent Epistemic Planning in ASP [66.76082318001976]
This paper presents an implementation of a multi-shot Answer Set Programming-based planner that can reason in multi-agent epistemic settings.
The paper shows how the planner, exploiting an ad-hoc epistemic state representation and the efficiency of ASP solvers, has competitive performance results on benchmarks collected from the literature.
arXiv Detail & Related papers (2020-08-07T06:35:56Z) - Adaptive Informative Path Planning with Multimodal Sensing [36.16721115973077]
We introduce Adaptive Informative Path Planning with Multimodal Sensing (AIPPMS).
We frame AIPPMS as a Partially Observable Markov Decision Process (POMDP) and solve it with online planning.
We evaluate our method on two domains: a simulated search-and-rescue scenario and a challenging extension to the classic RockSample problem.
arXiv Detail & Related papers (2020-03-21T20:28:57Z)