A Theory of Abstraction in Reinforcement Learning
- URL: http://arxiv.org/abs/2203.00397v1
- Date: Tue, 1 Mar 2022 12:46:28 GMT
- Title: A Theory of Abstraction in Reinforcement Learning
- Authors: David Abel
- Abstract summary: In this dissertation, I present a theory of abstraction in reinforcement learning.
I first offer three desiderata for functions that carry out the process of abstraction.
I then present a suite of new algorithms and analysis that clarify how agents can learn to abstract according to these desiderata.
- Score: 18.976500531441346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning defines the problem facing agents that learn to make
good decisions through action and observation alone. To be effective problem
solvers, such agents must efficiently explore vast worlds, assign credit from
delayed feedback, and generalize to new experiences, all while making use of
limited data, computational resources, and perceptual bandwidth. Abstraction is
essential to all of these endeavors. Through abstraction, agents can form
concise models of their environment that support the many practices required of
a rational, adaptive decision maker. In this dissertation, I present a theory
of abstraction in reinforcement learning. I first offer three desiderata for
functions that carry out the process of abstraction: they should 1) preserve
representation of near-optimal behavior, 2) be learned and constructed
efficiently, and 3) lower planning or learning time. I then present a suite of
new algorithms and analysis that clarify how agents can learn to abstract
according to these desiderata. Collectively, these results provide a partial
path toward the discovery and use of abstraction that minimizes the complexity
of effective reinforcement learning.
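To make the first desideratum concrete, the sketch below (illustrative only, not code from the dissertation) builds an approximate Q*-irrelevance abstraction: ground states whose optimal action values agree within a tolerance epsilon are grouped into the same abstract state, one classical way to preserve near-optimal behavior under abstraction. The state names, Q* values, and the epsilon threshold are invented for the example.

```python
from typing import Dict, List, Tuple

def q_star_abstraction(q_star: Dict[str, List[float]],
                       epsilon: float = 0.1) -> Dict[str, int]:
    """Map each ground state to an abstract-state id.

    q_star[s] holds the optimal action values Q*(s, .) for state s; two
    states share an abstract state when their Q* vectors differ by at most
    epsilon in every action (an approximate Q*-irrelevance abstraction).
    """
    abstract_id: Dict[str, int] = {}
    representatives: List[Tuple[str, List[float]]] = []
    for s, q in q_star.items():
        for idx, (_, q_rep) in enumerate(representatives):
            if all(abs(a - b) <= epsilon for a, b in zip(q, q_rep)):
                abstract_id[s] = idx  # close enough to an existing cluster
                break
        else:
            abstract_id[s] = len(representatives)  # start a new abstract state
            representatives.append((s, q))
    return abstract_id

# Toy example: s0 and s1 have nearly identical Q* values, so they collapse.
phi = q_star_abstraction({"s0": [1.00, 0.20], "s1": [1.05, 0.18], "s2": [0.00, 0.90]})
print(phi)  # {'s0': 0, 's1': 0, 's2': 1}
```

With a small epsilon the abstract model stays faithful to the ground MDP (desideratum 1), while coarser groupings shrink the state space and can lower planning or learning time (desideratum 3).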
Related papers
- Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph [62.685920585838616]
Abstraction ability is essential in human intelligence, yet it remains under-explored in language models.
We present AbsPyramid, a unified entailment graph of 221K textual descriptions of abstraction knowledge.
arXiv Detail & Related papers (2023-11-15T18:11:23Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction, the capability to induce abstract concepts from concrete instances and to apply them flexibly beyond the learning context, is desirable for deep learning models.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Learning Dynamic Abstract Representations for Sample-Efficient Reinforcement Learning [22.25237742815589]
In many real-world problems, the learning agent needs to learn a problem's abstractions and solution simultaneously.
This paper presents a novel top-down approach for constructing state abstractions while carrying out reinforcement learning.
arXiv Detail & Related papers (2022-10-04T23:05:43Z)
- Towards Computing an Optimal Abstraction for Structural Causal Models [16.17846886492361]
We focus on the problem of learning abstractions.
We suggest a concrete measure of information loss, and we illustrate its contribution to learning new abstractions.
arXiv Detail & Related papers (2022-08-01T14:35:57Z)
- Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning [120.38381203153159]
Reinforcement learning can train policies that effectively perform complex tasks.
For long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and composing lower-level skills.
We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill.
arXiv Detail & Related papers (2021-11-04T22:46:16Z)
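The Value Function Spaces entry above has a simple core: the state representation is the vector of value estimates of the available lower-level skills, evaluated at the current state. The sketch below uses two hand-made skill value functions purely for illustration; in the paper the entries would come from the learned value functions of trained skills.

```python
import numpy as np

def value_function_space(state, skill_value_fns):
    """Embed `state` as [V_1(state), ..., V_K(state)], one entry per skill."""
    return np.array([v(state) for v in skill_value_fns])

# Toy stand-in skills: each "value function" scores proximity to that skill's goal.
skills = [
    lambda s: -abs(s[0] - 1.0),   # skill 1: reach x = 1
    lambda s: -abs(s[1] - 0.5),   # skill 2: reach y = 0.5
]
print(value_function_space((0.2, 0.4), skills))  # e.g. [-0.8 -0.1]
```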
- Learning Abstract Models for Strategic Exploration and Fast Reward Transfer [85.19766065886422]
We learn an accurate Markov Decision Process (MDP) over abstract states to avoid compounding errors.
Our approach achieves strong results on three of the hardest Arcade Learning Environment games.
We can reuse the learned abstract MDP for new reward functions, achieving higher reward in 1000x fewer samples than model-free methods trained from scratch.
arXiv Detail & Related papers (2020-07-12T03:33:50Z)
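The reuse idea in the last entry can also be sketched: once a transition model over abstract states has been estimated, planning (here plain value iteration) can be rerun for each new reward function without collecting new samples. The 3-state, 2-action model and both reward vectors below are invented for illustration and are not the paper's learned abstraction.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, iters=500):
    """Plan in a tabular (abstract) MDP.

    P[a, s, s'] is the probability of moving from abstract state s to s'
    under action a; R[s] is the reward for occupying state s.
    """
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        # Q(s, a) = R(s) + gamma * sum_s' P(s' | s, a) * V(s')
        Q = R[:, None] + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)
    return V

# A toy "learned" abstract model: action 0 advances toward state 2, action 1 resets.
P = np.array([
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
])

# The same model is reused for two different reward functions.
print(value_iteration(P, R=np.array([0.0, 0.0, 1.0])))  # reward only in state 2
print(value_iteration(P, R=np.array([0.0, 1.0, 0.0])))  # reward only in state 1
```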