Off-Policy Imitation Learning from Observations
- URL: http://arxiv.org/abs/2102.13185v1
- Date: Thu, 25 Feb 2021 21:33:47 GMT
- Title: Off-Policy Imitation Learning from Observations
- Authors: Zhuangdi Zhu, Kaixiang Lin, Bo Dai, Jiayu Zhou
- Abstract summary: Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit.
We propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner.
Our approach is comparable with the state of the art on locomotion tasks in terms of both sample-efficiency and asymptotic performance.
- Score: 78.30794935265425
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Learning from Observations (LfO) is a practical reinforcement learning
scenario from which many applications can benefit through the reuse of
incomplete resources. Compared to conventional imitation learning (IL), LfO is
more challenging because of the lack of expert action guidance. In both
conventional IL and LfO, distribution matching is at the heart of their
foundation. Traditional distribution matching approaches are sample-costly,
as they depend on on-policy transitions for policy learning. Towards
sample-efficiency, some off-policy solutions have been proposed, which,
however, either lack comprehensive theoretical justifications or depend on the
guidance of expert actions. In this work, we propose a sample-efficient LfO
approach that enables off-policy optimization in a principled manner. To
further accelerate the learning procedure, we regulate the policy update with
an inverse action model, which assists distribution matching from the
perspective of mode-covering. Extensive empirical results on challenging
locomotion tasks indicate that our approach is comparable with the state of the
art in terms of both sample-efficiency and asymptotic performance.
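For context, the sketch below shows one common way to write a state-only distribution-matching objective with an inverse-action-model regularizer, in the spirit of the abstract. The notation (d^pi, d^E, q_phi, lambda) is assumed for illustration only and may differ from the paper's exact divergence and regularization choices.
```latex
% Assumed notation: d^{\pi} and d^{E} are the state-transition occupancy
% measures of the learner and the expert; q_{\phi}(a \mid s, s') is an
% inverse action model fit on the agent's own transitions; \lambda is a
% hypothetical trade-off weight.
\[
\min_{\pi}\;
D_{\mathrm{KL}}\!\left(d^{\pi}(s,s') \,\middle\|\, d^{E}(s,s')\right)
\;+\;
\lambda\,
\mathbb{E}_{(s,s')\sim d^{E}}\!\left[
D_{\mathrm{KL}}\!\left(q_{\phi}(\cdot \mid s,s') \,\middle\|\, \pi(\cdot \mid s)\right)
\right]
\]
% The first term matches state-transition distributions (only expert states
% are observed, never expert actions); the second, behavior-cloning-style
% term is the mode-covering regularization role that the inverse action
% model plays according to the abstract.
```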
Related papers
- Multi-Agent Reinforcement Learning from Human Feedback: Data Coverage and Algorithmic Techniques [65.55451717632317]
We study Multi-Agent Reinforcement Learning from Human Feedback (MARLHF), exploring both theoretical foundations and empirical validations.
We define the task as identifying Nash equilibrium from a preference-only offline dataset in general-sum games.
Our findings underscore the multifaceted approach required for MARLHF, paving the way for effective preference-based multi-agent systems.
arXiv Detail & Related papers (2024-09-01T13:14:41Z) - Preference-Guided Reinforcement Learning for Efficient Exploration [7.83845308102632]
We introduce LOPE: Learning Online with trajectory Preference guidancE, an end-to-end preference-guided RL framework.
Our intuition is that LOPE directly adjusts the focus of online exploration by considering human feedback as guidance.
LOPE outperforms several state-of-the-art methods regarding convergence rate and overall performance.
arXiv Detail & Related papers (2024-07-09T02:11:12Z) - Mimicking Better by Matching the Approximate Action Distribution [48.95048003354255]
We introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
We show that it requires considerably fewer interactions to achieve expert performance, outperforming current state-of-the-art on-policy methods.
arXiv Detail & Related papers (2023-06-16T12:43:47Z) - Imitation Learning by State-Only Distribution Matching [2.580765958706854]
Imitation Learning from observation describes policy learning in a similar way to human learning.
We propose a non-adversarial learning-from-observations approach, together with an interpretable convergence and performance metric.
arXiv Detail & Related papers (2022-02-09T08:38:50Z) - Deterministic and Discriminative Imitation (D2-Imitation): Revisiting Adversarial Imitation for Sample Efficiency [61.03922379081648]
We propose an off-policy sample efficient approach that requires no adversarial training or min-max optimization.
Our empirical results show that D2-Imitation is effective in achieving good sample efficiency, outperforming several off-policy extension approaches of adversarial imitation.
arXiv Detail & Related papers (2021-12-11T19:36:19Z) - Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction [73.77593805292194]
We train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework.
To mitigate the off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy actor-critic training.
arXiv Detail & Related papers (2021-10-22T22:07:51Z) - Reward-Conditioned Policies [100.64167842905069]
Imitation learning requires near-optimal expert data.
Can we learn effective policies via supervised learning without demonstrations?
We show how such an approach can be derived as a principled method for policy search (see the sketch after this list).
arXiv Detail & Related papers (2019-12-31T18:07:43Z)
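The last entry above frames policy search as supervised learning conditioned on reward. As a rough, hypothetical illustration of that general idea (not the specific algorithm of the cited paper), a return-conditioned policy can be trained by maximum likelihood on the agent's own trajectories; all names and hyperparameters below are illustrative.
```python
import torch
import torch.nn as nn

# Hypothetical sketch of a return-conditioned policy: the network takes the
# observation plus a scalar target return, and is trained with supervised
# maximum likelihood on (state, action, achieved return) tuples collected by
# the agent itself.

class ReturnConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 1, hidden), nn.ReLU(),  # +1 for the target return
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),                 # mean of a Gaussian action
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs: torch.Tensor, target_return: torch.Tensor):
        inp = torch.cat([obs, target_return.unsqueeze(-1)], dim=-1)
        return torch.distributions.Normal(self.net(inp), self.log_std.exp())


def supervised_update(policy, optimizer, batch):
    """One supervised step: imitate the agent's own (state, action) pairs,
    conditioned on the return actually achieved from that state."""
    dist = policy.dist(batch["obs"], batch["return_to_go"])
    loss = -dist.log_prob(batch["act"]).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
At evaluation time, such a policy is typically conditioned on a target return higher than the average achieved so far, so that supervised learning alone can steer behavior toward better trajectories.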
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.