Projection Abstractions in Planning Under the Lenses of Abstractions for MDPs
- URL: http://arxiv.org/abs/2412.02615v1
- Date: Tue, 03 Dec 2024 17:43:28 GMT
- Title: Projection Abstractions in Planning Under the Lenses of Abstractions for MDPs
- Authors: Giuseppe Canonaco, Alberto Pozanco, Daniel Borrajo
- Abstract summary: The concept of abstraction has been developed independently in the contexts of AI Planning and discounted Markov Decision Processes (MDPs).
This paper aims to look at projection abstractions in Planning through the lenses of discounted MDPs.
Starting from a projection abstraction built according to Classical or Probabilistic Planning techniques, we will show how the same abstraction can be obtained under the abstraction frameworks available for discounted MDPs.
- Abstract: The concept of abstraction has been developed independently in the contexts of AI Planning and discounted Markov Decision Processes (MDPs). However, abstractions are built and used differently in Planning and in MDPs, even though the two settings share many commonalities. To date, no work has tried to relate and unify the two fields on the matter of abstractions, unraveling the different assumptions involved and their effect on how abstractions can be used. In this paper we aim to do so by looking at projection abstractions in Planning through the lenses of discounted MDPs. Starting from a projection abstraction built according to Classical or Probabilistic Planning techniques, we show how the same abstraction can be obtained under the abstraction frameworks available for discounted MDPs. Along the way, we focus on the computational and representational advantages and disadvantages of both worlds, pointing out new research directions of interest to both fields.
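To make the paper's central object concrete: a projection abstraction restricts each state, viewed as an assignment to a set of state variables, to a chosen subset of variables (a pattern), so that all states agreeing on the pattern collapse into a single abstract state. Below is a minimal Python sketch of this idea; the toy task and all function names are our own illustration, not the paper's formalism.

```python
# Minimal sketch of a projection abstraction over a SAS+-style task:
# states are variable assignments, and the abstraction keeps only the
# variables in a chosen pattern. Illustrative names, not from the paper.

def project_state(state: dict, pattern: set) -> tuple:
    """Restrict a full variable assignment to the variables in `pattern`."""
    return tuple(sorted((v, state[v]) for v in pattern))

def abstract_transitions(transitions, pattern):
    """Induce abstract transitions: concrete states that agree on the
    pattern variables map to the same abstract state."""
    abstract = set()
    for s, a, s2 in transitions:
        abstract.add((project_state(s, pattern), a, project_state(s2, pattern)))
    return abstract

# Toy task with variables x and y; project onto the pattern {x}.
transitions = [
    ({"x": 0, "y": 0}, "move", {"x": 1, "y": 0}),
    ({"x": 0, "y": 1}, "move", {"x": 1, "y": 1}),
]
# Both concrete transitions collapse into one abstract transition.
print(abstract_transitions(transitions, {"x"}))
```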
Related papers
- Learning Abstract World Model for Value-preserving Planning with Options [11.254212901595523]
We leverage the structure of a given set of temporally-extended actions to learn abstract Markov Decision Processes (MDPs).
We characterize state abstractions necessary to ensure that planning with these skills, by simulating trajectories in the abstract MDP, results in policies with bounded value loss in the original MDP.
We evaluate our approach in goal-based navigation environments that require continuous abstract states to plan successfully and show that abstract model learning improves the sample efficiency of planning and learning.
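As a toy illustration of the simulation step described above, the sketch below rolls out trajectories in a small hand-written tabular abstract MDP whose actions stand in for options. The model, names, and numbers are hypothetical; learning such models and bounding the value loss is the paper's contribution and is not attempted here.

```python
import random

# (abstract_state, option) -> list of (probability, next_state, reward);
# a hand-written stand-in for a learned abstract model over options.
model = {
    ("s0", "opt_a"): [(0.9, "s1", 1.0), (0.1, "s0", 0.0)],
    ("s1", "opt_a"): [(1.0, "s1", 0.0)],
}

def rollout(model, state, policy, horizon, gamma=0.95):
    """Discounted return of one trajectory simulated in the abstract MDP."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        outcomes = model.get((state, policy(state)))
        if outcomes is None:  # option not applicable in this abstract state
            break
        probs, next_states, rewards = zip(*outcomes)
        i = random.choices(range(len(outcomes)), weights=probs)[0]
        ret += discount * rewards[i]
        discount *= gamma
        state = next_states[i]
    return ret

policy = lambda s: "opt_a"
values = [rollout(model, "s0", policy, horizon=30) for _ in range(2000)]
print(f"Monte-Carlo estimate of V(s0): {sum(values) / len(values):.2f}")
```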
arXiv Detail & Related papers (2024-06-22T13:41:02Z)
- Learning Planning Abstractions from Language [28.855381137615275]
This paper presents a framework for learning state and action abstractions in sequential decision-making domains.
Our framework, planning abstraction from language (PARL), utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space.
arXiv Detail & Related papers (2024-05-06T21:24:22Z)
- How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval? [120.49126407479717]
We propose a sketch-based image retrieval framework capable of handling sketch abstraction at varied levels.
For granularity-level abstraction understanding, we dictate that the retrieval model should not treat all abstraction-levels equally.
Our Acc.@q loss uniquely allows a sketch to narrow/broaden its focus in terms of how stringent the evaluation should be.
arXiv Detail & Related papers (2024-03-11T23:08:29Z)
- AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph [62.685920585838616]
Abstraction ability is essential to human intelligence, yet it remains under-explored in language models.
We present AbsPyramid, a unified entailment graph of 221K textual descriptions of abstraction knowledge.
arXiv Detail & Related papers (2023-11-15T18:11:23Z)
- Ideal Abstractions for Decision-Focused Learning [108.15241246054515]
We propose a method that configures the output space automatically in order to minimize the loss of decision-relevant information.
We demonstrate the method in two domains: data acquisition for deep neural network training and a closed-loop wildfire management task.
arXiv Detail & Related papers (2023-03-29T23:31:32Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models: inducing abstract concepts from concrete instances and flexibly applying them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning [26.715198108255162]
We develop a novel framework for learning state and action abstractions.
We learn relational, neuro-symbolic abstractions that generalize over object identities and numbers.
We show that our learned abstractions are able to quickly solve held-out tasks of longer horizons.
arXiv Detail & Related papers (2022-03-17T22:13:09Z)
- MDP Abstraction with Successor Features [14.433551477386318]
We study abstraction in the context of reinforcement learning, in which agents may perform state or temporal abstractions.
In this work, we propose successor abstraction, a novel abstraction scheme building on successor features.
Our successor abstraction allows us to learn abstract environment models with semantics that are transferable across different environments.
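For context, the successor features of a policy summarize its expected discounted feature occupancies, psi(s) = E[sum_t gamma^t * phi(s_t) | s_0 = s], which in the tabular case satisfy psi = phi + gamma * P_pi * psi. The sketch below solves this linear system for a hypothetical two-state MDP; it illustrates the generic construction, not the paper's specific successor-abstraction scheme.

```python
import numpy as np

gamma = 0.9
# Hypothetical transition matrix under a fixed policy (rows: current state).
P_pi = np.array([[0.0, 1.0],
                 [0.5, 0.5]])
# One feature row per state; with one-hot features, psi recovers the
# discounted state-occupancy matrix.
phi = np.eye(2)

# Tabular successor features satisfy psi = phi + gamma * P_pi @ psi,
# i.e. (I - gamma * P_pi) @ psi = phi.
psi = np.linalg.solve(np.eye(2) - gamma * P_pi, phi)
print(psi)
```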
arXiv Detail & Related papers (2021-10-18T11:35:08Z)
- Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning [51.74463056899926]
This work proposes an optimization-based manipulation planning framework where the objectives are learned functionals of signed-distance fields that represent objects in the scene.
We show that representing objects as signed-distance fields enables learning and representing a variety of models with higher accuracy than point-cloud and occupancy measure representations.
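For readers unfamiliar with the representation: a signed-distance field maps a query point to its distance from an object's surface, negative inside and positive outside. The snippet below is a generic toy SDF for a sphere, meant only to illustrate the representation; the paper learns manipulation objectives as functionals of such fields.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

points = np.array([[0.0, 0.0, 0.0],   # at the center -> -1.0
                   [2.0, 0.0, 0.0]])  # one unit outside -> +1.0
print(sphere_sdf(points, center=np.zeros(3), radius=1.0))
```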
arXiv Detail & Related papers (2021-10-02T12:36:58Z)
- Learning Abstract Models for Strategic Exploration and Fast Reward Transfer [85.19766065886422]
We learn an accurate Markov Decision Process (MDP) over abstract states to avoid compounding errors.
Our approach achieves strong results on three of the hardest Arcade Learning Environment games.
We can reuse the learned abstract MDP for new reward functions, achieving higher reward with 1000x fewer samples than model-free methods trained from scratch.
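As a generic illustration of the first step, fitting a tabular MDP over abstract states: the sketch below estimates transition probabilities from observed abstract transitions by maximum likelihood. The data and names are hypothetical, and the paper's method for constructing abstract states and avoiding compounding errors goes well beyond this.

```python
from collections import Counter, defaultdict

# Hypothetical (abstract_state, action, next_abstract_state) observations.
observed = [
    ("room1", "go", "room2"), ("room1", "go", "room2"),
    ("room1", "go", "room1"), ("room2", "go", "room2"),
]

counts = defaultdict(Counter)
for s, a, s2 in observed:
    counts[(s, a)][s2] += 1

# Maximum-likelihood transition model: normalize the counts.
model = {sa: {s2: n / sum(c.values()) for s2, n in c.items()}
         for sa, c in counts.items()}
print(model[("room1", "go")])  # {'room2': 0.666..., 'room1': 0.333...}
```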
arXiv Detail & Related papers (2020-07-12T03:33:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.