Offline RL with Observation Histories: Analyzing and Improving Sample Complexity
- URL: http://arxiv.org/abs/2310.20663v1
- Date: Tue, 31 Oct 2023 17:29:46 GMT
- Title: Offline RL with Observation Histories: Analyzing and Improving Sample Complexity
- Authors: Joey Hong and Anca Dragan and Sergey Levine
- Abstract summary: Offline reinforcement learning can synthesize more optimal behavior from a dataset consisting only of suboptimal trials.
We show that standard offline RL algorithms conditioned on observation histories suffer from poor sample complexity.
We propose a bisimulation loss that offline RL can explicitly optimize to improve worst-case sample complexity.
- Score: 70.7884839812069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Offline reinforcement learning (RL) can in principle synthesize more optimal
behavior from a dataset consisting only of suboptimal trials. One way that this
can happen is by "stitching" together the best parts of otherwise suboptimal
trajectories that overlap on similar states, to create new behaviors where each
individual state is in-distribution, but the overall returns are higher.
However, in many interesting and complex applications, such as autonomous
navigation and dialogue systems, the state is partially observed. Even worse,
the state representation is unknown or not easy to define. In such cases,
policies and value functions are often conditioned on observation histories
instead of states. In these cases, it is not clear if the same kind of
"stitching" is feasible at the level of observation histories, since two
different trajectories would always have different histories, and thus "similar
states" that might lead to effective stitching cannot be leveraged.
Theoretically, we show that standard offline RL algorithms conditioned on
observation histories suffer from poor sample complexity, in accordance with
the above intuition. We then identify sufficient conditions under which offline
RL can still be efficient -- intuitively, it needs to learn a compact
representation of history comprising only features relevant for action
selection. We introduce a bisimulation loss that captures the extent to which
this happens, and propose that offline RL can explicitly optimize this loss to
aid worst-case sample complexity. Empirically, we show that across a variety of
tasks either our proposed loss improves performance, or the value of this loss
is already minimized as a consequence of standard offline RL, indicating that
it correlates well with good performance.
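The abstract does not spell out the form of the bisimulation loss, so the following is only a rough sketch of how such an auxiliary objective could be attached to a history encoder: histories whose predicted rewards and successor features are close should receive nearby features. All names below (`HistoryEncoder`, `reward_head`, `bisimulation_loss`) are illustrative assumptions, not the paper's implementation.

```python
# Rough sketch (not the paper's implementation) of a bisimulation-style
# auxiliary loss on observation-history representations.
import torch
import torch.nn as nn


class HistoryEncoder(nn.Module):
    """Maps an observation history to a compact feature vector."""

    def __init__(self, obs_dim: int, hidden_dim: int = 128, feat_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, feat_dim)

    def forward(self, histories: torch.Tensor) -> torch.Tensor:
        # histories: (batch, time, obs_dim)
        _, h_last = self.rnn(histories)
        return self.proj(h_last[-1])  # (batch, feat_dim)


def bisimulation_loss(encoder: HistoryEncoder,
                      reward_head: nn.Module,      # predicts reward from features
                      hist_a: torch.Tensor,        # (batch, time, obs_dim)
                      hist_b: torch.Tensor,        # a second batch of histories
                      next_feat_a: torch.Tensor,   # target features of successor histories
                      next_feat_b: torch.Tensor,
                      gamma: float = 0.99) -> torch.Tensor:
    """Push the distance between encoded histories toward a bisimulation-style
    target: reward difference plus discounted distance of successor features."""
    z_a, z_b = encoder(hist_a), encoder(hist_b)
    feat_dist = torch.norm(z_a - z_b, dim=-1)
    reward_dist = (reward_head(z_a) - reward_head(z_b)).abs().squeeze(-1)
    next_dist = torch.norm(next_feat_a - next_feat_b, dim=-1)
    target = reward_dist + gamma * next_dist
    return (feat_dist - target).pow(2).mean()
```

In use, a term like this would be added with a small weight to the standard offline RL objective, so that behaviorally equivalent histories map to nearby features, which is the compact-representation property the analysis above relies on.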
Related papers
- Is Value Learning Really the Main Bottleneck in Offline RL? [70.54708989409409]
We show that the choice of a policy extraction algorithm significantly affects the performance and scalability of offline RL.
We propose two simple test-time policy improvement methods and show that these methods lead to better performance.
arXiv Detail & Related papers (2024-06-13T17:07:49Z)
- Efficient Reinforcement Learning with Impaired Observability: Learning to Act with Delayed and Missing State Observations [92.25604137490168]
This paper presents a theoretical investigation of efficient reinforcement learning in control systems where state observations may be delayed or missing.
We present algorithms and establish near-optimal regret upper and lower bounds of the form $\tilde{\mathcal{O}}(\sqrt{\mathrm{poly}(H)\, SAK})$ for RL in the delayed and missing observation settings.
arXiv Detail & Related papers (2023-06-02T02:46:39Z)
- Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare [38.42691031505782]
We propose a form of linear Q-function decomposition induced by factored action spaces.
Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space (see the sketch after this list).
arXiv Detail & Related papers (2023-05-02T19:13:10Z)
- Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning [105.70602423944148]
We propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making.
Rather than aligning a model-predicted ("imagined") future state with the real state returned by the environment, VCR applies a $Q$-value head to both states and obtains two distributions of action values.
Our method is demonstrated to achieve new state-of-the-art performance among search-free RL algorithms.
arXiv Detail & Related papers (2022-06-25T03:02:25Z)
- Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting [60.98700344526674]
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning.
In this paper, we investigate a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states in a controlled and infrequent manner.
We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space.
arXiv Detail & Related papers (2021-05-17T17:22:07Z)
- Interpretable performance analysis towards offline reinforcement learning: A dataset perspective [6.526790418943535]
We propose a two-fold taxonomy for existing offline RL algorithms.
We explore the correlation between the performance of different types of algorithms and the distribution of actions under states.
We create a benchmark platform on the Atari domain, entitled RL easy go (RLEG), at an estimated cost of more than 0.3 million dollars.
arXiv Detail & Related papers (2021-05-12T07:17:06Z)
- Instabilities of Offline RL with Pre-Trained Neural Representation [127.89397629569808]
In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
arXiv Detail & Related papers (2021-03-08T18:06:44Z)
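The linear Q-function decomposition mentioned in the factored-action-space entry above can be illustrated, assuming a discrete action vector made of independent sub-actions, as a sum of per-dimension Q-heads. The class below (`FactoredQNetwork`) is an illustrative sketch, not the cited paper's code.

```python
# Illustrative sketch (not the cited paper's code) of a linear Q-function
# decomposition induced by a factored action space:
#   Q(s, a) is approximated as sum_d Q_d(s, a_d).
import torch
import torch.nn as nn


class FactoredQNetwork(nn.Module):
    def __init__(self, state_dim: int, action_sizes: list, hidden: int = 64):
        super().__init__()
        # One small Q-head per action dimension; head d scores every
        # possible choice of sub-action a_d.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n))
            for n in action_sizes
        ])

    def forward(self, state: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # state: (batch, state_dim); actions: (batch, num_dims) integer sub-actions.
        per_dim = [head(state).gather(1, actions[:, d:d + 1])  # (batch, 1)
                   for d, head in enumerate(self.heads)]
        return torch.cat(per_dim, dim=1).sum(dim=1)            # (batch,)
```

Because each head only covers its own sub-action, combinations whose individual components appear in the dataset can be scored even if the full joint action was never observed, which is the kind of generalization to underexplored regions the summary above refers to.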