Provable Representation with Efficient Planning for Partial Observable Reinforcement Learning
- URL: http://arxiv.org/abs/2311.12244v3
- Date: Tue, 11 Jun 2024 03:51:28 GMT
- Title: Provable Representation with Efficient Planning for Partial Observable Reinforcement Learning
- Authors: Hongming Zhang, Tongzheng Ren, Chenjun Xiao, Dale Schuurmans, Bo Dai
- Abstract summary: In most real-world reinforcement learning applications, state information is only partially observable, which breaks the Markov decision process assumption.
We develop a representation-based perspective that leads to a coherent framework and tractable algorithmic approach for practical reinforcement learning from partial observations.
We empirically demonstrate the proposed algorithm can surpass state-of-the-art performance with partial observations across various benchmarks.
- Score: 74.67655210734338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In most real-world reinforcement learning applications, state information is only partially observable, which breaks the Markov decision process assumption and leads to inferior performance for algorithms that conflate observations with state. Partially Observable Markov Decision Processes (POMDPs), on the other hand, provide a general framework that allows for partial observability to be accounted for in learning, exploration and planning, but presents significant computational and statistical challenges. To address these difficulties, we develop a representation-based perspective that leads to a coherent framework and tractable algorithmic approach for practical reinforcement learning from partial observations. We provide a theoretical analysis for justifying the statistical efficiency of the proposed algorithm, and also empirically demonstrate the proposed algorithm can surpass state-of-the-art performance with partial observations across various benchmarks, advancing reliable reinforcement learning towards more practical applications.
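The representation-based perspective can be illustrated with a toy sketch: fold the observation-action history into a latent "belief" vector and compute values on that latent rather than on raw observations, which (approximately) restores the Markov property the abstract refers to. The recurrent encoder and its fixed random weights below are hypothetical stand-ins, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM, LATENT_DIM, N_ACTIONS = 4, 2, 8, 3

# Hypothetical fixed encoder weights; in practice these would be learned.
W_in = rng.normal(size=(LATENT_DIM, OBS_DIM + ACT_DIM))
W_rec = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1

def encode_history(history):
    """Fold an (obs, action) history into a single latent belief vector."""
    z = np.zeros(LATENT_DIM)
    for obs, act in history:
        x = np.concatenate([obs, act])
        z = np.tanh(W_in @ x + W_rec @ z)
    return z

# Q-function defined on the latent representation, not raw observations.
W_q = rng.normal(size=(N_ACTIONS, LATENT_DIM))

def greedy_action(history):
    z = encode_history(history)
    return int(np.argmax(W_q @ z))

history = [(rng.normal(size=OBS_DIM), rng.normal(size=ACT_DIM))
           for _ in range(5)]
a = greedy_action(history)
```

Planning then operates on the latent vector as if it were a state, which is what makes the approach tractable relative to planning over full histories.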
Related papers
- Generalization bound for estimating causal effects from observational network data [25.055822137402746]
We derive a generalization bound for causal effect estimation in network scenarios by exploiting 1) a reweighting scheme based on the joint propensity score and 2) a representation learning scheme based on the Integral Probability Metric (IPM).
Motivated by the analysis of the bound, we propose a weighting regression method based on the joint propensity score augmented with representation learning.
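As a minimal illustration of the propensity-score reweighting idea (this is plain inverse-propensity weighting on synthetic data, not the paper's network estimator, and the true propensity is assumed known rather than learned):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Synthetic data: confounder x affects both treatment assignment and outcome.
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))                     # true propensity score
t = (rng.random(n) < p).astype(float)            # treatment indicator
y = 2.0 * t + x + rng.normal(scale=0.1, size=n)  # true treatment effect = 2

# Inverse-propensity-weighted estimate of the average treatment effect:
# reweighting corrects for the confounded treatment assignment.
ate = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
```

A naive difference of group means would be biased upward here because treated units tend to have larger x; the reweighting removes that bias.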
arXiv Detail & Related papers (2023-08-08T03:14:34Z) - Dynamic deep-reinforcement-learning algorithm in Partially Observed Markov Decision Processes [6.729108277517129]
This study shows the benefit of including action sequences in order to solve Partially Observable Markov Decision Processes.
The developed algorithms showed enhanced robustness of controller performance against different types of external disturbances.
arXiv Detail & Related papers (2023-07-29T08:52:35Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
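The core of a multi-step inverse model can be sketched in a few lines: predict the action taken at time t from the observations at times t and t+k. The linear classifier and its weights below are illustrative placeholders, not the MusIK implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
OBS_DIM, N_ACTIONS = 3, 4

# Hypothetical weights of a linear inverse model; in practice, learned.
W = rng.normal(size=(N_ACTIONS, 2 * OBS_DIM))

def inverse_model_logits(obs_t, obs_tk):
    """Score each candidate action for the transition o_t -> o_{t+k}."""
    x = np.concatenate([obs_t, obs_tk])
    return W @ x

def nll_loss(obs_t, obs_tk, action):
    """Cross-entropy objective used to train the inverse model."""
    logits = inverse_model_logits(obs_t, obs_tk)
    logits = logits - logits.max()  # numerical stability
    return -(logits[action] - np.log(np.exp(logits).sum()))

loss = nll_loss(rng.normal(size=OBS_DIM), rng.normal(size=OBS_DIM), action=1)
```

Training such a model forces the observation encoding to retain exactly the information needed to distinguish actions, which is what makes it useful as a learned representation.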
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
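An elliptical (LinUCB-style) exploration bonus on learned features conveys the flavor of UCB exploration with a representation phi(s, a); the features and covariance below are synthetic, and this is a generic optimism bonus rather than the paper's kernel-embedding planner.

```python
import numpy as np

rng = np.random.default_rng(2)
D, N = 5, 50

# Features of previously visited state-action pairs (synthetic).
Phi = rng.normal(size=(N, D))
# Regularized feature covariance accumulated from visited data.
Sigma = Phi.T @ Phi + np.eye(D)

def ucb_bonus(phi, beta=1.0):
    """Optimism bonus beta * sqrt(phi^T Sigma^{-1} phi):
    large for feature directions the agent has rarely visited."""
    return beta * np.sqrt(phi @ np.linalg.solve(Sigma, phi))

b = ucb_bonus(rng.normal(size=D))
```

Adding this bonus to estimated values makes a planner prefer under-explored directions of the representation space.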
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - RELAX: Representation Learning Explainability [10.831313203043514]
We propose RELAX, which is the first approach for attribution-based explanations of representations.
RELAX explains representations by measuring similarities in the representation space between an input and masked-out versions of itself.
We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning.
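The masking idea can be sketched on a toy input: score each position by how much randomly masked copies that keep it still resemble the original in representation space. The mean/std "feature extractor" below is a hypothetical stand-in for a real encoder.

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 8

def encoder(x):
    """Stand-in feature extractor (hypothetical)."""
    return np.array([x.mean(), x.std() + 1e-8])

def relax_importance(x, n_masks=64):
    """Accumulate cosine similarity between the input's representation
    and representations of masked copies, at positions the mask keeps."""
    h = encoder(x)
    imp = np.zeros_like(x)
    for _ in range(n_masks):
        m = (rng.random(x.shape) > 0.5).astype(float)
        hm = encoder(x * m)
        sim = h @ hm / (np.linalg.norm(h) * np.linalg.norm(hm))
        imp += sim * m
    return imp / n_masks

x = rng.random((H, W))
R = relax_importance(x)
```

Positions whose removal changes the representation little end up with high importance under the masks that keep them, giving a per-position attribution map.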
arXiv Detail & Related papers (2021-12-19T14:51:31Z) - MICo: Learning improved representations via sampling-based state similarity for Markov decision processes [18.829939056796313]
We present a new behavioural distance over the state space of a Markov decision process.
We demonstrate the use of this distance as an effective means of shaping the learnt representations of deep reinforcement learning agents.
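A sampling-flavored behavioural distance can be computed on a toy MDP by iterating U(x, y) = |r(x) - r(y)| + gamma * E[U(x', y')], with next states drawn independently from each state's transition distribution. The small reward vector and transition matrix below are made up for illustration; this conveys the style of distance, not MICo's exact operator.

```python
import numpy as np

n_states, gamma = 3, 0.9
r = np.array([0.0, 0.5, 1.0])          # hypothetical per-state rewards
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])        # hypothetical transition matrix

# Fixed-point iteration: the update is a gamma-contraction, so it converges.
U = np.zeros((n_states, n_states))
for _ in range(200):
    # (P @ U @ P.T)[i, j] = E_{x'~P(i), y'~P(j)}[U(x', y')]
    U = np.abs(r[:, None] - r[None, :]) + gamma * (P @ U @ P.T)
```

The resulting matrix U can then be used as a target for shaping learned state representations so that behaviourally similar states map to nearby embeddings.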
arXiv Detail & Related papers (2021-06-03T14:24:12Z) - Functional Regularization for Representation Learning: A Unified Theoretical Perspective [27.93916012334704]
Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks.
We present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data.
We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions.
arXiv Detail & Related papers (2020-08-06T04:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.