Imitation Learning from Observations under Transition Model Disparity
- URL: http://arxiv.org/abs/2204.11446v1
- Date: Mon, 25 Apr 2022 05:36:54 GMT
- Title: Imitation Learning from Observations under Transition Model Disparity
- Authors: Tanmay Gangwani, Yuan Zhou, Jian Peng
- Abstract summary: Learning to perform tasks by leveraging a dataset of expert observations, known as imitation learning from observations (ILO), is an important paradigm for learning skills without access to the expert reward function or the expert actions.
Recent methods for scalable ILO utilize adversarial learning to match the state-transition distributions of the expert and the learner.
We propose an algorithm that trains an intermediary policy in the learner environment and uses it as a surrogate expert for the learner.
- Score: 22.456737935789103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning to perform tasks by leveraging a dataset of expert observations,
also known as imitation learning from observations (ILO), is an important
paradigm for learning skills without access to the expert reward function or
the expert actions. We consider ILO in the setting where the expert and the
learner agents operate in different environments, with the source of the
discrepancy being the transition dynamics model. Recent methods for scalable
ILO utilize adversarial learning to match the state-transition distributions of
the expert and the learner, an approach that becomes challenging when the
dynamics are dissimilar. In this work, we propose an algorithm that trains an
intermediary policy in the learner environment and uses it as a surrogate
expert for the learner. The intermediary policy is learned such that the state
transitions generated by it are close to the state transitions in the expert
dataset. To derive a practical and scalable algorithm, we employ concepts from
prior work on estimating the support of a probability distribution. Experiments
using MuJoCo locomotion tasks highlight that our method compares favorably to
the baselines for ILO with transition dynamics mismatch.
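The core recipe in the abstract (adversarial matching of state-transition distributions, with an intermediary policy trained in the learner environment acting as a surrogate expert) can be sketched with generic components. The snippet below is an illustrative sketch, not the authors' implementation: it assumes PyTorch, an arbitrary 11-dimensional state, a GAIL-style reward, and placeholder data, and it omits the support-estimation machinery used in the paper.

```python
# Illustrative sketch (not the authors' code): a discriminator over (s, s') pairs
# drives an intermediary policy whose transitions should match the expert dataset.
import torch
import torch.nn as nn

STATE_DIM = 11  # assumed state dimensionality (e.g., a MuJoCo locomotion task)

class TransitionDiscriminator(nn.Module):
    """Classifies (s, s') pairs: expert transitions vs. intermediary-policy transitions."""
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

disc = TransitionDiscriminator(STATE_DIM)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder batches; in practice expert_* comes from the expert observation
# dataset and interm_* from rollouts of the intermediary policy in the learner env.
expert_s, expert_s_next = torch.randn(256, STATE_DIM), torch.randn(256, STATE_DIM)
interm_s, interm_s_next = torch.randn(256, STATE_DIM), torch.randn(256, STATE_DIM)

# One discriminator update: expert transitions -> 1, intermediary transitions -> 0.
logits_e = disc(expert_s, expert_s_next)
logits_i = disc(interm_s, interm_s_next)
loss = bce(logits_e, torch.ones_like(logits_e)) + bce(logits_i, torch.zeros_like(logits_i))
opt.zero_grad()
loss.backward()
opt.step()

# GAIL-style reward for the intermediary policy: high when its state transitions
# are hard to distinguish from the expert's. Once trained, the intermediary acts
# as a surrogate expert, with actions, inside the learner's own dynamics.
with torch.no_grad():
    d = torch.sigmoid(disc(interm_s, interm_s_next))
    surrogate_reward = -torch.log(1.0 - d + 1e-8)
```

Because the intermediary is rolled out in the learner's dynamics, the learner can then imitate it directly, which sidesteps matching distributions across mismatched transition models.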
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
arXiv Detail & Related papers (2024-10-15T00:41:18Z)
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that the intervening expert in interactive imitation learning must be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Robust Visual Imitation Learning with Inverse Dynamics Representations [32.806294517277976]
We develop an inverse dynamics state representation learning objective to align the expert environment and the learning environment.
With the abstract state representation, we design an effective reward function, which thoroughly measures the similarity between behavior data and expert data.
Our approach achieves near-expert performance in most environments and significantly outperforms state-of-the-art visual IL and robust IL methods.
arXiv Detail & Related papers (2023-10-22T11:47:35Z)
- Imitation Learning from Observation through Optimal Transport [25.398983671932154]
Imitation Learning from Observation (ILfO) is a setting in which a learner tries to imitate the behavior of an expert.
We show that existing methods can be simplified to generate a reward function without requiring learned models or adversarial learning.
We demonstrate the effectiveness of this simple approach on a variety of continuous control tasks and find that it surpasses the state of the art in the ILfO setting (an illustrative optimal-transport reward sketch appears after this list).
arXiv Detail & Related papers (2023-10-02T20:53:20Z)
- Imitation from Observation With Bootstrapped Contrastive Learning [12.048166025000976]
Imitation from observation (IfO) is a learning paradigm in which autonomous agents are trained in a Markov Decision Process from expert observations, without access to the expert's actions.
We present BootIfOL, an IfO algorithm that aims to learn a reward function that takes an agent trajectory and compares it to an expert.
We evaluate our approach on a variety of control tasks showing that we can train effective policies using a limited number of demonstrative trajectories.
arXiv Detail & Related papers (2023-02-13T17:32:17Z)
- Feature-Based Interpretable Reinforcement Learning based on State-Transition Models [3.883460584034766]
Growing concerns regarding the operational use of AI models in the real world have caused a surge of interest in explaining AI models' decisions to humans.
We propose a method for offering local explanations on risk in reinforcement learning.
arXiv Detail & Related papers (2021-05-14T23:43:11Z)
- Off-Policy Imitation Learning from Observations [78.30794935265425]
Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit.
We propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner.
Our approach is comparable with the state of the art on locomotion tasks in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2021-02-25T21:33:47Z)
- On Data Efficiency of Meta-learning [17.739215706060605]
We study an often overlooked aspect of modern meta-learning algorithms: their data efficiency.
We introduce a new simple framework for evaluating meta-learning methods under a limit on the available supervision.
We propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited supervision regime.
arXiv Detail & Related papers (2021-01-30T01:44:12Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
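For the entry "Imitation Learning from Observation through Optimal Transport" above, the following sketch shows how an optimal-transport reward can replace adversarial learning. It is an assumption-laden illustration, not that paper's code: it uses entropic OT (Sinkhorn iterations), Euclidean state distances, NumPy, and arbitrary trajectory lengths and regularization.

```python
# Hedged sketch of an optimal-transport reward for ILfO (not the paper's code).
# Per-step rewards are the negative transport cost of matching the learner's
# state trajectory to the expert's; Sinkhorn, the Euclidean cost, and eps are
# illustrative choices.
import numpy as np

def sinkhorn_plan(cost, eps=1.0, iters=200):
    """Entropic-regularized OT plan between two uniform empirical measures."""
    n, m = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(iters):
        u = a / (K @ v + 1e-12)
        v = b / (K.T @ u + 1e-12)
    return u[:, None] * K * v[None, :]

def ot_rewards(learner_states, expert_states):
    """Reward per learner state: negative transport cost it pays under the OT plan."""
    cost = np.linalg.norm(learner_states[:, None, :] - expert_states[None, :, :], axis=-1)
    plan = sinkhorn_plan(cost)
    # Each row of the plan carries (roughly) mass 1/n, so rescale by n
    # to put the reward on a per-step scale.
    return -(plan * cost).sum(axis=1) * len(learner_states)

# Toy usage with random trajectories standing in for real rollouts.
learner = np.random.randn(50, 11)
expert = np.random.randn(60, 11)
print(ot_rewards(learner, expert).shape)  # (50,)
```

No discriminator or learned dynamics model is needed: the reward is recomputed from the OT plan for each new learner trajectory and handed to any standard RL optimizer.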
This list is automatically generated from the titles and abstracts of the papers in this site.