Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations
- URL: http://arxiv.org/abs/2207.10050v1
- Date: Wed, 20 Jul 2022 17:29:04 GMT
- Title: Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations
- Authors: Haoran Xu, Xianyuan Zhan, Honglei Yin, Huiling Qin
- Abstract summary: We study the problem of offline Imitation Learning (IL) where an agent aims to learn an optimal expert behavior policy without additional online environment interactions.
We introduce an additional discriminator to distinguish expert and non-expert data.
Our proposed algorithm achieves higher returns and faster training speed compared to baseline algorithms.
- Score: 5.760034336327491
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the problem of offline Imitation Learning (IL) where an agent aims
to learn an optimal expert behavior policy without additional online
environment interactions. Instead, the agent is provided with a supplementary
offline dataset from suboptimal behaviors. Prior works that address this
problem either require that expert data occupies the majority proportion of the
offline dataset, or need to learn a reward function and perform offline
reinforcement learning (RL) afterwards. In this paper, we aim to address the
problem without additional steps of reward learning and offline RL training for
the case when demonstrations contain a large proportion of suboptimal data.
Building upon behavioral cloning (BC), we introduce an additional discriminator to
distinguish expert and non-expert data. We propose a cooperation framework to
boost the learning of both tasks. Based on this framework, we design a new IL
algorithm, where the outputs of the discriminator serve as the weights of the BC
loss. Experimental results show that our proposed algorithm achieves higher
returns and faster training speed compared to baseline algorithms.
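The weighting mechanism described in the abstract can be made concrete with a short sketch: a discriminator is trained to separate expert pairs from the mixed (union) dataset, and its output is used as a per-sample weight on the BC loss. The snippet below is a minimal PyTorch-style illustration under assumed interfaces (e.g., a `policy.log_prob(state, action)` method and the weight normalization), not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): a discriminator scores each
# (state, action) pair, and its output weights a behavioral-cloning loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Outputs the probability that a (state, action) pair is expert data."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return torch.sigmoid(self.net(torch.cat([state, action], dim=-1)))

def discriminator_loss(disc, expert_s, expert_a, union_s, union_a):
    # Standard binary cross-entropy: expert pairs -> 1, mixed (union) pairs -> 0.
    d_expert = disc(expert_s, expert_a)
    d_union = disc(union_s, union_a)
    return (F.binary_cross_entropy(d_expert, torch.ones_like(d_expert))
            + F.binary_cross_entropy(d_union, torch.zeros_like(d_union)))

def weighted_bc_loss(policy, disc, union_s, union_a):
    # BC loss on the union dataset, weighted by the discriminator's belief
    # that each pair is expert-like; weights are detached from the policy update.
    with torch.no_grad():
        w = disc(union_s, union_a)        # in (0, 1)
        w = w / (w.mean() + 1e-8)         # normalization is an assumption
    log_prob = policy.log_prob(union_s, union_a)  # assumed policy interface
    return -(w * log_prob).mean()
```

In the paper, the discriminator and the policy are trained under a cooperation framework rather than independently; the sketch only shows how the discriminator outputs enter the BC objective as weights.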
Related papers
- Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration [54.8229698058649]
We study how unlabeled prior trajectory data can be leveraged to learn efficient exploration strategies.
Our method SUPE (Skills from Unlabeled Prior data for Exploration) demonstrates that a careful combination of these ideas compounds their benefits.
We empirically show that SUPE reliably outperforms prior strategies, successfully solving a suite of long-horizon, sparse-reward tasks.
arXiv Detail & Related papers (2024-10-23T17:58:45Z)
- Understanding the performance gap between online and offline alignment algorithms [63.137832242488926]
We show that offline algorithms train the policy to become good at pairwise classification, while online algorithms are good at generation.
This hints at a unique interplay between discriminative and generative capabilities, which is greatly impacted by the sampling process.
Our study sheds light on the pivotal role of on-policy sampling in AI alignment, and hints at certain fundamental challenges of offline alignment algorithms.
arXiv Detail & Related papers (2024-05-14T09:12:30Z)
- Efficient Online Learning with Offline Datasets for Infinite Horizon MDPs: A Bayesian Approach [25.77911741149966]
We show that if the learning agent models the behavioral policy used by the expert, it can do substantially better in terms of minimizing cumulative regret.
We then propose the Informed RLSVI algorithm to efficiently approximate the iPSRL algorithm.
arXiv Detail & Related papers (2023-10-17T19:01:08Z)
- Bridging Imitation and Online Reinforcement Learning: An Optimistic Tale [27.02990488317357]
Given an offline demonstration dataset from an imperfect expert, what is the best way to leverage it to bootstrap online learning performance in MDPs?
We first propose an Informed Posterior Sampling-based RL (iPSRL) algorithm that uses the offline dataset, and information about the expert's behavioral policy used to generate the offline dataset.
Since this algorithm is computationally impractical, we then propose the iRLSVI algorithm that can be seen as a combination of the RLSVI algorithm for online RL, and imitation learning.
arXiv Detail & Related papers (2023-03-20T18:16:25Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches.
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- Identifying Expert Behavior in Offline Training Datasets Improves Behavioral Cloning of Robotic Manipulation Policies [15.383102120417407]
This paper presents our solution for the Real Robot Challenge III, a competition featured in the NeurIPS 2022 Competition Track.
It aimed at addressing dexterous robotic manipulation tasks through learning from pre-collected offline data.
arXiv Detail & Related papers (2023-01-30T15:55:02Z)
- When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning? [86.43517734716606]
Offline reinforcement learning (RL) algorithms can acquire effective policies by utilizing previously collected experience, without any online interaction.
Behavioral cloning (BC) algorithms mimic a subset of the dataset via supervised learning.
We show that policies trained on sufficiently noisy suboptimal data can attain better performance than even BC algorithms with expert data.
arXiv Detail & Related papers (2022-04-12T08:25:34Z)
- Retrieval-Augmented Reinforcement Learning [63.32076191982944]
We train a network to map a dataset of past experiences to optimal behavior.
The retrieval process is trained to retrieve information from the dataset that may be useful in the current context.
We show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores.
arXiv Detail & Related papers (2022-02-17T02:44:05Z)
- Representation Matters: Offline Pretraining for Sequential Decision Making [27.74988221252854]
In this paper, we consider a slightly different approach to incorporating offline data into sequential decision-making.
We find that the use of pretraining with unsupervised learning objectives can dramatically improve the performance of policy learning algorithms.
arXiv Detail & Related papers (2021-02-11T02:38:12Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)