Encoding Compositionality in Classical Planning Solutions
- URL: http://arxiv.org/abs/2107.05850v1
- Date: Tue, 13 Jul 2021 05:05:11 GMT
- Title: Encoding Compositionality in Classical Planning Solutions
- Authors: Angeline Aguinaldo, William Regli
- Abstract summary: It is desirable to encode the trace of literals throughout the plan to capture the dependencies between actions selected.
The approach of this paper is to view the actions as maps between literals and the selected plan as a composition of those maps.
- Score: 0.8122270502556374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical AI planners provide solutions to planning problems in the form of
long and opaque text outputs. To aid in the understanding and transferability of
planning solutions, it is necessary to have a rich and comprehensible
representation for both humans and computers beyond the current line-by-line
text notation. In particular, it is desirable to encode the trace of literals
throughout the plan to capture the dependencies between actions selected. The
approach of this paper is to view the actions as maps between literals and the
selected plan as a composition of those maps. The mathematical theory, called
category theory, provides the relevant structures for capturing maps, their
compositions, and maps between compositions. We employ this theory to propose
an algorithm-agnostic, model-based representation for domains, problems, and
plans expressed in the commonly used planning description language, PDDL. This
category-theoretic representation is accompanied by a graphical syntax in
addition to a linear notation, similar to algebraic expressions, that can be
used to infer literals used at every step of the plan. This provides the
appropriate constructive abstraction and facilitates comprehension for human
operators. In this paper, we demonstrate this on a plan within the Blocksworld
domain.
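The paper itself gives no code; the following is a minimal Python sketch of the stated idea only, treating each STRIPS-style action as a map on sets of literals and a plan as the composition of those maps. The Blocksworld literal and action names, and the helper functions `action` and `compose`, are illustrative assumptions, not the paper's notation or algorithm.

```python
from functools import reduce

# A world state is a frozenset of ground literals (STRIPS-style).
State = frozenset

def action(name, pre, add, delete):
    """Return a map on states: check preconditions, then apply delete/add effects."""
    def apply(state: State) -> State:
        assert pre <= state, f"{name}: preconditions {pre - state} not satisfied"
        return State((state - delete) | add)
    apply.__name__ = name
    return apply

def compose(*steps):
    """Compose action maps left to right: the whole plan becomes one map on states."""
    return lambda state: reduce(lambda s, f: f(s), steps, state)

# Illustrative Blocksworld actions (hypothetical literal names).
unstack_a_b = action(
    "unstack(a,b)",
    pre={"on(a,b)", "clear(a)", "handempty"},
    add={"holding(a)", "clear(b)"},
    delete={"on(a,b)", "clear(a)", "handempty"},
)
putdown_a = action(
    "putdown(a)",
    pre={"holding(a)"},
    add={"ontable(a)", "clear(a)", "handempty"},
    delete={"holding(a)"},
)

plan = compose(unstack_a_b, putdown_a)

initial = State({"on(a,b)", "ontable(b)", "clear(a)", "handempty"})
print(sorted(plan(initial)))
# The trace of literals at each step is recovered from the prefix compositions:
# initial, unstack_a_b(initial), plan(initial).
```

In this reading, the plan corresponds to a single composite map, and the literals holding at every intermediate step fall out of the prefix compositions rather than from re-parsing the line-by-line plan text.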
Related papers
- Planning as In-Painting: A Diffusion-Based Embodied Task Planning
Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z)
- EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form
Narrative Text Generation [114.50719922069261]
We propose a new framework called Evaluation-guided Iterative Plan Extraction for long-form narrative text generation (EIPE-text).
EIPE-text has three stages: plan extraction, learning, and inference.
We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling.
arXiv Detail & Related papers (2023-10-12T10:21:37Z)
- Towards Ontology-Mediated Planning with OWL DL Ontologies (Extended
Version) [7.995360025953931]
We present a new approach in which the planning specification and ontology are kept separate, and are linked together using an interface.
This allows planning experts to work in a familiar formalism, while existing domains can be easily integrated and extended by experts.
The idea is to rewrite the ontology-mediated planning problem into a classical planning problem to be processed by existing planning tools.
arXiv Detail & Related papers (2023-08-16T08:05:53Z)
- A Categorical Representation Language and Computational System for
Knowledge-Based Planning [5.004278968175897]
We propose an alternative approach to representing and managing updates to world states during planning.
Based on the category-theoretic concepts of $\mathsf{C}$-sets and double-pushout rewriting (DPO), our proposed representation can effectively handle structured knowledge about world states.
arXiv Detail & Related papers (2023-05-26T19:01:57Z)
- Text Reading Order in Uncontrolled Conditions by Sparse Graph
Segmentation [71.40119152422295]
We propose a lightweight, scalable and generalizable approach to identify text reading order.
The model is language-agnostic and runs effectively across multi-language datasets.
It is small enough to be deployed on virtually any platform including mobile devices.
arXiv Detail & Related papers (2023-05-04T06:21:00Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language
Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics
Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Categorical Tools for Natural Language Processing [0.0]
This thesis develops the translation between category theory and computational linguistics.
The three chapters deal with syntax, semantics and pragmatics.
The resulting functorial models can be composed to form games where equilibria are the solutions of language processing tasks.
arXiv Detail & Related papers (2022-12-13T15:12:37Z)
- Classical Planning in Deep Latent Space [33.06766829037679]
Latplan is an unsupervised architecture combining deep learning and classical planning.
Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution.
arXiv Detail & Related papers (2021-06-30T21:31:21Z)
- Discrete Word Embedding for Logical Natural Language Understanding [5.8088738147746914]
We propose an unsupervised neural model for learning a discrete embedding of words.
Our embedding represents each word as a set of propositional statements describing a transition rule in classical/STRIPS planning formalism.
arXiv Detail & Related papers (2020-08-26T16:15:18Z)
- Graph-Structured Referring Expression Reasoning in The Wild [105.95488002374158]
Grounding referring expressions aims to locate in an image an object referred to by a natural language expression.
We propose a scene graph guided modular network (SGMN) to perform reasoning over a semantic graph and a scene graph.
We also propose Ref-Reasoning, a large-scale real-world dataset for structured referring expression reasoning.
arXiv Detail & Related papers (2020-04-19T11:00:30Z)