Benchmarks and Algorithms for Offline Preference-Based Reward Learning
- URL: http://arxiv.org/abs/2301.01392v1
- Date: Tue, 3 Jan 2023 23:52:16 GMT
- Title: Benchmarks and Algorithms for Offline Preference-Based Reward Learning
- Authors: Daniel Shin, Anca D. Dragan, Daniel S. Brown
- Abstract summary: We propose an approach that uses an offline dataset to craft preference queries via pool-based active learning.
Our proposed approach does not require actual physical rollouts or an accurate simulator for either the reward learning or policy optimization steps.
- Score: 41.676208473752425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a reward function from human preferences is challenging as it
typically requires having a high-fidelity simulator or using expensive and
potentially unsafe actual physical rollouts in the environment. However, in
many tasks the agent might have access to offline data from related tasks in
the same target environment. While offline data is increasingly being used to
aid policy optimization via offline RL, our observation is that it can be a
surprisingly rich source of information for preference learning as well. We
propose an approach that uses an offline dataset to craft preference queries
via pool-based active learning, learns a distribution over reward functions,
and optimizes a corresponding policy via offline RL. Crucially, our proposed
approach does not require actual physical rollouts or an accurate simulator for
either the reward learning or policy optimization steps. To test our approach,
we first evaluate existing offline RL benchmarks for their suitability for
offline reward learning. Surprisingly, for many offline RL domains, we find
that simply using a trivial reward function results in good policy performance,
making these domains ill-suited for evaluating learned rewards. To address
this, we identify a subset of existing offline RL benchmarks that are well
suited for offline reward learning and also propose new offline apprenticeship
learning benchmarks which allow for more open-ended behaviors. When evaluated
on this curated set of domains, our empirical results suggest that combining
offline RL with learned human preferences can enable an agent to learn to
perform novel tasks that were not explicitly shown in the offline data.
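The abstract describes a two-stage pipeline: craft pairwise preference queries from a pool of offline trajectory segments, fit a distribution (here represented as an ensemble) of reward networks to the answers, and then optimize a policy with offline RL. Below is a minimal sketch of the first stage only, assuming a PyTorch reward ensemble, disagreement-based query selection, and a Bradley-Terry preference likelihood; all names (`RewardNet`, `select_query`, `bradley_terry_loss`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): pool-based active preference queries from
# offline trajectory segments, scored by reward-ensemble disagreement, with a
# Bradley-Terry loss for fitting each ensemble member to the collected labels.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Maps a (state, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def segment_return(net, seg):
    """Predicted return of a segment dict holding 'obs' and 'act' tensors."""
    return net(seg["obs"], seg["act"]).sum()


def select_query(ensemble, candidate_pairs):
    """Pool-based active learning: pick the segment pair the ensemble disagrees on most."""
    def disagreement(pair):
        seg_a, seg_b = pair
        # Each member's probability that segment A is preferred over segment B.
        probs = torch.stack([
            torch.sigmoid(segment_return(net, seg_a) - segment_return(net, seg_b))
            for net in ensemble
        ])
        return probs.var().item()

    return max(candidate_pairs, key=disagreement)


def bradley_terry_loss(net, seg_a, seg_b, pref):
    """pref = 1.0 if the human preferred segment A, else 0.0."""
    logit = segment_return(net, seg_a) - segment_return(net, seg_b)
    return nn.functional.binary_cross_entropy_with_logits(logit, torch.tensor(pref))
```

In this sketch, each ensemble member is refit on the accumulated labeled pairs after every answered query, and the ensemble mean then serves as the learned reward for the offline RL stage; a relabeling sketch for that stage follows the related-papers list below.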
Related papers
- Preference Elicitation for Offline Reinforcement Learning [59.136381500967744]
We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm.
Our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy.
arXiv Detail & Related papers (2024-06-26T15:59:13Z)
- Is Value Learning Really the Main Bottleneck in Offline RL? [70.54708989409409]
We show that the choice of a policy extraction algorithm significantly affects the performance and scalability of offline RL.
We propose two simple test-time policy improvement methods and show that these methods lead to better performance.
arXiv Detail & Related papers (2024-06-13T17:07:49Z)
- Adaptive Policy Learning for Offline-to-Online Reinforcement Learning [27.80266207283246]
We consider an offline-to-online setting where the agent first learns from the offline dataset and is then trained online.
We propose a framework called Adaptive Policy Learning for effectively taking advantage of offline and online data.
arXiv Detail & Related papers (2023-03-14T08:13:21Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches.
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- Offline Preference-Based Apprenticeship Learning [11.21888613165599]
We study how an offline dataset can be used to address two challenges that autonomous systems face when they endeavor to learn from, adapt to, and collaborate with humans.
First, we use the offline dataset to efficiently infer the human's reward function via pool-based active preference learning.
Second, given this learned reward function, we perform offline reinforcement learning to optimize a policy based on the inferred human intent.
arXiv Detail & Related papers (2021-07-20T04:15:52Z)
- Offline Meta-Reinforcement Learning with Online Self-Supervision [66.42016534065276]
We propose a hybrid offline meta-RL algorithm, which uses offline data with rewards to meta-train an adaptive policy.
Our method uses the offline data to learn the distribution of reward functions, which is then sampled to self-supervise reward labels for the additional online data.
We find that using additional data and self-generated rewards significantly improves an agent's ability to generalize.
arXiv Detail & Related papers (2021-07-08T17:01:32Z)
- Representation Matters: Offline Pretraining for Sequential Decision Making [27.74988221252854]
In this paper, we consider a slightly different approach to incorporating offline data into sequential decision-making.
We find that the use of pretraining with unsupervised learning objectives can dramatically improve the performance of policy learning algorithms.
arXiv Detail & Related papers (2021-02-11T02:38:12Z)
- OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning [107.6943868812716]
In many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited.
Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors.
In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning.
arXiv Detail & Related papers (2020-10-26T14:31:08Z)
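As a companion to the sketch under the abstract above, and to the two-stage pipeline described in the Offline Preference-Based Apprenticeship Learning entry, here is a minimal sketch of the second stage under the same assumptions: relabel the offline transitions with the mean of the hypothetical learned reward ensemble so that any off-the-shelf offline RL algorithm can optimize a policy without environment rollouts. The dataset layout and reward normalization are illustrative assumptions, not the paper's implementation.

```python
# Sketch (an assumption, not the paper's code): relabel an offline dataset with
# the ensemble-mean learned reward; the result feeds a standard offline RL
# learner (e.g., a CQL- or IQL-style agent), which is omitted here.
import torch


@torch.no_grad()
def relabel_dataset(ensemble, dataset):
    """dataset: dict of tensors with keys 'obs', 'act', 'next_obs', 'done'.

    Returns a copy whose 'rew' field is the ensemble-mean predicted reward,
    normalized to zero mean / unit variance for training stability.
    """
    preds = torch.stack([net(dataset["obs"], dataset["act"]) for net in ensemble])
    rew = preds.mean(dim=0)
    rew = (rew - rew.mean()) / (rew.std() + 1e-8)
    return {**dataset, "rew": rew}
```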