Strictly Batch Imitation Learning by Energy-based Distribution Matching
- URL: http://arxiv.org/abs/2006.14154v2
- Date: Thu, 14 Jan 2021 17:54:32 GMT
- Title: Strictly Batch Imitation Learning by Energy-based Distribution Matching
- Authors: Daniel Jarrett, Ioana Bica, Mihaela van der Schaar
- Abstract summary: Consider learning a policy purely on the basis of demonstrated behavior -- that is, with no access to reinforcement signals, no knowledge of transition dynamics, and no further interaction with the environment.
One solution is simply to retrofit existing algorithms for apprenticeship learning to work in the offline setting.
But such an approach leans heavily on off-policy evaluation or offline model estimation, and can be indirect and inefficient.
We argue that a good solution should be able to explicitly parameterize a policy, implicitly learn from rollout dynamics, and operate in an entirely offline fashion.
- Score: 104.33286163090179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider learning a policy purely on the basis of demonstrated behavior --
that is, with no access to reinforcement signals, no knowledge of transition
dynamics, and no further interaction with the environment. This *strictly batch
imitation learning* problem arises wherever live experimentation is costly,
such as in healthcare. One solution is simply to retrofit existing algorithms
for apprenticeship learning to work in the offline setting. But such an
approach leans heavily on off-policy evaluation or offline model estimation,
and can be indirect and inefficient. We argue that a good solution should be
able to explicitly parameterize a policy (i.e. respecting action conditionals),
implicitly learn from rollout dynamics (i.e. leveraging state marginals), and
-- crucially -- operate in an entirely offline fashion. To address this
challenge, we propose a novel technique by *energy-based distribution matching*
(EDM): By identifying parameterizations of the (discriminative) model of a
policy with the (generative) energy function for state distributions, EDM
yields a simple but effective solution that equivalently minimizes a divergence
between the occupancy measure for the demonstrator and a model thereof for the
imitator. Through experiments with application to control and healthcare
settings, we illustrate consistent performance gains over existing algorithms
for strictly batch imitation learning.
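
Below is a minimal, illustrative Python/PyTorch sketch of the joint "policy-as-energy-model" idea the abstract describes: a single network's logits give the action conditionals (explicit policy), while the negative log-sum-exp of those logits acts as an energy over states (implicit state-marginal model). This is an assumption-laden simplification, not the authors' exact EDM objective; in particular, the SGLD negative sampling and all hyperparameters here are generic placeholders.

```python
# Sketch of an energy-based distribution-matching objective (assumed form).
# The same logits f_theta(s) define (i) pi_theta(a|s) via softmax and
# (ii) a state energy E_theta(s) = -logsumexp_a f_theta(s)[a].
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyEnergyModel(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def logits(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

    def energy(self, s: torch.Tensor) -> torch.Tensor:
        # Unnormalised state log-density: rho_theta(s) ~ exp(-E_theta(s))
        return -torch.logsumexp(self.logits(s), dim=-1)

def sgld_negatives(model, s_init, steps=20, step_size=1e-2, noise=1e-2):
    """Crude Langevin sampler over states used as EBM negatives (assumption)."""
    s = s_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(model.energy(s).sum(), s)[0]
        s = (s - step_size * grad + noise * torch.randn_like(s))
        s = s.detach().requires_grad_(True)
    return s.detach()

def edm_style_loss(model, states, actions, alpha=1.0):
    # (i) Explicit policy term: cross-entropy on demonstrated action conditionals.
    bc_loss = F.cross_entropy(model.logits(states), actions)
    # (ii) Implicit dynamics term: lower the energy of demonstrated states and
    #      raise it on sampled negatives, matching the state occupancy measure.
    negatives = sgld_negatives(model, states + 0.1 * torch.randn_like(states))
    ebm_loss = model.energy(states).mean() - model.energy(negatives).mean()
    return bc_loss + alpha * ebm_loss
```

Training then amounts to minimizing this loss over the fixed batch of demonstrations with any stochastic optimizer; no environment interaction, reward signal, or transition model is required, which is the strictly batch property the abstract emphasizes.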
Related papers
- UNIQ: Offline Inverse Q-learning for Avoiding Undesirable Demonstrations [11.666700714916065]
We address the problem of learning, entirely offline, a policy that avoids undesirable demonstrations.
We formulate the learning task as maximizing a statistical distance between the learning policy and the undesirable policy.
Our algorithm, UNIQ, tackles these challenges by building on the inverse Q-learning framework.
arXiv Detail & Related papers (2024-10-10T18:52:58Z) - Operator World Models for Reinforcement Learning [37.69110422996011]
Policy Mirror Descent (PMD) is a powerful and theoretically sound methodology for sequential decision-making.
It is not directly applicable to Reinforcement Learning (RL) due to the inaccessibility of explicit action-value functions.
We introduce a novel approach based on learning a world model of the environment using conditional mean embeddings.
arXiv Detail & Related papers (2024-06-28T12:05:47Z) - Efficient Imitation Learning with Conservative World Models [54.52140201148341]
We tackle the problem of policy learning from expert demonstrations without a reward function.
We re-frame imitation learning as a fine-tuning problem, rather than a pure reinforcement learning one.
arXiv Detail & Related papers (2024-05-21T20:53:18Z) - Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline
Reinforcement Learning [57.83919813698673]
Projected Off-Policy Q-Learning (POP-QL) is a novel actor-critic algorithm that simultaneously reweights off-policy samples and constrains the policy to prevent divergence and reduce value-approximation error.
In our experiments, POP-QL not only shows competitive performance on standard benchmarks, but also outperforms competing methods in tasks where the data-collection policy is significantly sub-optimal.
arXiv Detail & Related papers (2023-11-25T00:30:58Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Scalable Bayesian Inverse Reinforcement Learning [93.27920030279586]
We introduce Approximate Variational Reward Imitation Learning (AVRIL).
Our method addresses the ill-posed nature of the inverse reinforcement learning problem.
Applying our method to real medical data alongside classic control simulations, we demonstrate Bayesian reward inference in environments beyond the scope of current methods.
arXiv Detail & Related papers (2021-02-12T12:32:02Z) - Guided Uncertainty-Aware Policy Optimization: Combining Learning and
Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches, by contrast, can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.