OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via
Distribution Matching
- URL: http://arxiv.org/abs/2109.04307v1
- Date: Thu, 9 Sep 2021 14:32:26 GMT
- Title: OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via
Distribution Matching
- Authors: Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota
- Abstract summary: Inverse Reinforcement Learning (IRL) is attractive in scenarios where reward engineering can be tedious.
Prior IRL algorithms use on-policy transitions, which require intensive sampling from the current policy for stable and optimal performance.
- We present Off-Policy Inverse Reinforcement Learning (OPIRL), which adopts an off-policy data distribution instead of an on-policy one.
- Score: 12.335788185691916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse Reinforcement Learning (IRL) is attractive in scenarios where reward
engineering can be tedious. However, prior IRL algorithms use on-policy
transitions, which require intensive sampling from the current policy for
stable and optimal performance. This limits IRL applications in the real world,
where environment interactions can become highly expensive. To tackle this
problem, we present Off-Policy Inverse Reinforcement Learning (OPIRL), which
(1) adopts an off-policy data distribution instead of an on-policy one, enabling a
significant reduction in the number of interactions with the environment, (2)
learns a stationary reward function that is transferable and generalizes well
under changing dynamics, and (3) leverages mode-covering behavior for faster
convergence. Through experiments, we demonstrate that our method is considerably
more sample efficient and generalizes to novel environments. It achieves policy
performance that is better than or comparable to the baselines with significantly
fewer environment interactions. Furthermore, we empirically show that the
recovered reward function generalizes to different tasks where prior methods are
prone to fail.
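
At a high level, off-policy adversarial IRL of this kind trains a discriminator on replay-buffer samples rather than fresh on-policy rollouts, and derives the learned reward from that discriminator. The sketch below illustrates the idea in PyTorch; the module and helper names (Discriminator, discriminator_loss, learned_reward) are hypothetical, and the code is a generic off-policy distribution-matching illustration rather than the authors' implementation, which adds further ingredients such as a transferable stationary reward and mode-covering updates.

# Minimal sketch of off-policy adversarial IRL (illustration only; hypothetical
# names, not the authors' released code). Expert and policy transitions are both
# drawn from replay buffers, and the discriminator-derived reward is fed to an
# off-policy actor-critic such as SAC.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Classifies (state, action) pairs as expert (1) or policy (0)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # returns logits

def discriminator_loss(disc, expert_batch, policy_batch):
    """Binary cross-entropy: expert pairs labelled 1, policy pairs labelled 0."""
    exp_logits = disc(*expert_batch)
    pol_logits = disc(*policy_batch)
    return (F.binary_cross_entropy_with_logits(exp_logits, torch.ones_like(exp_logits))
            + F.binary_cross_entropy_with_logits(pol_logits, torch.zeros_like(pol_logits)))

def learned_reward(disc, obs, act):
    """One common choice of learned reward: log D - log(1 - D), which equals the logits."""
    with torch.no_grad():
        return disc(obs, act)

# Usage sketch: inside the training loop, sample minibatches from an expert buffer
# and from the policy's replay buffer (off-policy data), update the discriminator
# with discriminator_loss, relabel the replay batch with learned_reward, and pass
# it to any off-policy RL update (e.g., SAC) in place of environment rewards.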
Related papers
- Rethinking Adversarial Inverse Reinforcement Learning: From the Angles of Policy Imitation and Transferable Reward Recovery [1.1394969272703013]
Adversarial inverse reinforcement learning (AIRL) serves as a foundational approach to providing comprehensive and transferable task descriptions.
This paper reexamines AIRL in light of an unobservable transition matrix or limited informative priors.
We show that AIRL can disentangle rewards for effective transfer with high probability, irrespective of specific conditions.
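
For reference, the reward disentanglement that AIRL-style transfer relies on separates a state-only reward term from a potential-based shaping term, f(s, a, s') = g(s) + gamma * h(s') - h(s). The snippet below is a small illustrative implementation of that decomposition; the class and attribute names are hypothetical and not taken from the paper.

# Illustrative sketch of the AIRL-style disentangled reward
#   f(s, a, s') = g(s) + gamma * h(s') - h(s)
# where g approximates a state-only reward and h is a shaping potential.
import torch
import torch.nn as nn

class DisentangledReward(nn.Module):
    def __init__(self, obs_dim, gamma=0.99, hidden=64):
        super().__init__()
        self.gamma = gamma
        self.g = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.h = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs, next_obs):
        # The shaping terms cancel telescopically along a trajectory, so g(s) is
        # the part of the reward intended to transfer across dynamics.
        return self.g(obs) + self.gamma * self.h(next_obs) - self.h(obs)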
arXiv Detail & Related papers (2024-10-10T06:21:32Z)
- EvIL: Evolution Strategies for Generalisable Imitation Learning [33.745657379141676]
In imitation learning (IL), the environment in which expert demonstrations are collected and the environment in which we want to deploy the learned policy are often not exactly the same.
Compared to policy-centric approaches to IL like behavioural cloning, reward-centric approaches like inverse reinforcement learning (IRL) often better replicate expert behaviour in new environments.
We find that modern deep IL algorithms frequently recover rewards which induce policies far weaker than the expert, even in the same environment the demonstrations were collected in.
We propose a novel evolution-strategies based method EvIL to optimise for a reward-shaping term that speeds up re-training in the target environment.
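
As an illustration of the evolution-strategies machinery involved, the sketch below runs a basic ES update on a parameter vector against a black-box fitness function. The quadratic fitness here is a toy stand-in (an assumption); in the paper's setting it would instead score how quickly a policy re-trains under the candidate reward-shaping term.

# Minimal evolution-strategies (ES) sketch: perturb a parameter vector, score the
# perturbations with a black-box fitness, and step along their fitness-weighted
# average.
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, fitness_fn, pop_size=32, sigma=0.1, lr=0.02):
    eps = rng.standard_normal((pop_size, theta.size))
    scores = np.array([fitness_fn(theta + sigma * e) for e in eps])
    scores = scores - scores.mean()          # baseline subtraction reduces variance
    return theta + lr / (pop_size * sigma) * eps.T @ scores

# Toy usage: recover a target vector by maximising negative squared distance.
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(300):
    theta = es_step(theta, lambda t: -np.sum((t - target) ** 2))
print(np.round(theta, 2))  # close to target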
arXiv Detail & Related papers (2024-06-15T22:46:39Z)
- Efficient Imitation Learning with Conservative World Models [54.52140201148341]
We tackle the problem of policy learning from expert demonstrations without a reward function.
We re-frame imitation learning as a fine-tuning problem, rather than a pure reinforcement learning one.
arXiv Detail & Related papers (2024-05-21T20:53:18Z)
- Active Exploration for Inverse Reinforcement Learning [58.295273181096036]
We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL)
AceIRL actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy.
We empirically evaluate AceIRL in simulations and find that it significantly outperforms more naive exploration strategies.
arXiv Detail & Related papers (2022-07-18T14:45:55Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Towards Robust Bisimulation Metric Learning [3.42658286826597]
Bisimulation metrics offer one solution to the representation learning problem.
We generalize value function approximation bounds for on-policy bisimulation metrics to non-optimal policies.
We find that the issues that arise in practice stem from an underconstrained dynamics model and an unstable dependence of the embedding norm on the reward signal.
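
As background, the on-policy bisimulation metric is the fixed point of d(s_i, s_j) = |r(s_i) - r(s_j)| + gamma * W_1(P(.|s_i), P(.|s_j)), where W_1 is the Wasserstein distance measured under d itself. The toy computation below assumes deterministic dynamics under the policy, so the Wasserstein term collapses to the metric at the successor states; it is an illustration of the fixed-point structure, not the paper's algorithm.

# Fixed-point iteration of an on-policy bisimulation metric for a tiny
# deterministic MDP (deterministic dynamics assumed to keep the example simple):
#   d(i, j) = |r(i) - r(j)| + gamma * d(next(i), next(j))
import numpy as np

rewards = np.array([0.0, 0.1, 1.0, 1.0])   # per-state reward under the policy
successor = np.array([1, 2, 3, 3])         # deterministic next state under the policy
gamma = 0.9

n = len(rewards)
d = np.zeros((n, n))
for _ in range(200):                       # iterate the contraction to convergence
    d = np.abs(rewards[:, None] - rewards[None, :]) + gamma * d[successor][:, successor]

print(np.round(d, 3))                      # states 2 and 3 end up at distance 0 (bisimilar)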
arXiv Detail & Related papers (2021-10-27T00:32:07Z)
- Online reinforcement learning with sparse rewards through an active inference capsule [62.997667081978825]
This paper introduces an active inference agent which minimizes the novel free energy of the expected future.
Our model is capable of solving sparse-reward problems with a very high sample efficiency.
We also introduce a novel method for approximating the prior model from the reward function, which simplifies the expression of complex objectives.
arXiv Detail & Related papers (2021-06-04T10:03:36Z)
- Off-Policy Imitation Learning from Observations [78.30794935265425]
Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit.
We propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner.
Our approach is comparable with the state of the art on locomotion tasks in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2021-02-25T21:33:47Z)
- Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient [34.16700176918835]
Off-policy Reinforcement Learning holds the promise of better data efficiency.
Current off-policy policy gradient methods either suffer from high bias or high variance, often delivering unreliable estimates.
We propose a nonparametric Bellman equation, which can be solved in closed form.
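
A generic way to obtain a Bellman equation that is solvable in closed form is to represent the transition operator with a kernel over the sampled states, so that the values at the sample points satisfy a finite linear system. The sketch below shows that construction with an RBF kernel on a toy one-dimensional problem; it is an assumption-laden illustration, not the authors' exact estimator.

# Kernelised Bellman equation solved in closed form: values at the sampled states
# satisfy V = R + gamma * P V, where P is a row-normalised kernel similarity
# between successor states and sampled states.
import numpy as np

def rbf(x, y, bandwidth=0.5):
    """RBF kernel matrix between two sets of 1-D states."""
    diff = x[:, None] - y[None, :]
    return np.exp(-0.5 * (diff / bandwidth) ** 2)

# A batch of off-policy transitions (s, r, s') on a 1-D state space (toy data).
rng = np.random.default_rng(0)
s = rng.uniform(0, 1, size=50)
r = np.cos(2 * np.pi * s)               # stand-in reward
s_next = np.clip(s + 0.05, 0, 1)        # stand-in dynamics

gamma = 0.95
P = rbf(s_next, s)
P /= P.sum(axis=1, keepdims=True)       # each successor is smoothed over sampled states

# Closed-form solve of (I - gamma * P) V = r for the values at the sample points.
V = np.linalg.solve(np.eye(len(s)) - gamma * P, r)
print(V[:5])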
arXiv Detail & Related papers (2020-10-27T13:40:06Z) - Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
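
A common way to let an off-policy agent reason about a shifting environment is to infer a latent context from a window of recent experience and condition the policy on it. The sketch below shows a generic encoder-plus-policy pair in PyTorch; the module names are hypothetical and the design is only in the spirit of the latent variable models described above, not the authors' architecture.

# Generic sketch: summarise recent experience into a latent context z and
# condition the policy on it, so behaviour can adapt as the environment shifts.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Maps a window of recent (s, a, r, s') tuples to a latent context z."""
    def __init__(self, transition_dim, latent_dim=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(transition_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, window):             # window: (batch, time, transition_dim)
        _, h = self.rnn(window)
        return self.head(h[-1])            # z: (batch, latent_dim)

class ContextConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))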
arXiv Detail & Related papers (2020-06-18T17:34:50Z)