Sample-efficient Adversarial Imitation Learning
- URL: http://arxiv.org/abs/2303.07846v2
- Date: Tue, 23 Jan 2024 16:14:46 GMT
- Title: Sample-efficient Adversarial Imitation Learning
- Authors: Dahuin Jung, Hyungyu Lee, Sungroh Yoon
- Abstract summary: We propose a self-supervised representation-based adversarial imitation learning method to learn state and action representations.
We show a 39% relative improvement over existing adversarial imitation learning methods on MuJoCo in a setting limited to 100 expert state-action pairs.
- Score: 45.400080101596956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imitation learning, in which learning is performed by demonstration, has been
studied and advanced for sequential decision-making tasks in which a reward
function is not predefined. However, imitation learning methods still require
numerous expert demonstration samples to successfully imitate an expert's
behavior. To improve sample efficiency, we utilize self-supervised
representation learning, which can generate vast training signals from the
given data. In this study, we propose a self-supervised representation-based
adversarial imitation learning method to learn state and action representations
that are robust to diverse distortions and temporally predictive, on non-image
control tasks. In particular, unlike existing self-supervised learning methods
for tabular data, we propose a corruption method for state and action
representations that is robust to diverse distortions. We
theoretically and empirically observe that making an informative feature
manifold with less sample complexity significantly improves the performance of
imitation learning. The proposed method shows a 39% relative improvement over
existing adversarial imitation learning methods on MuJoCo in a setting limited
to 100 expert state-action pairs. Moreover, we conduct comprehensive ablations
and additional experiments using demonstrations with varying optimality to
provide insights into a range of factors.
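The abstract describes learning representations of non-image (tabular) states and actions via a corruption-based self-supervised objective. The paper's exact corruption scheme is not given here, so the following is only a minimal sketch of one common tabular corruption strategy (SCARF-style marginal resampling, an assumption on our part): a random subset of features in each state-action vector is replaced with values drawn from that feature's empirical marginal distribution, producing distorted views for representation learning.

```python
import numpy as np

def corrupt(batch, corruption_rate=0.3, rng=None):
    """Corrupt a batch of state-action vectors (n, d) by replacing a random
    subset of features with values drawn from each feature's empirical
    marginal distribution (SCARF-style corruption; the paper's exact
    corruption method may differ)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = batch.shape
    mask = rng.random((n, d)) < corruption_rate  # which entries to corrupt
    # Sampling from each feature's marginal == independently shuffling
    # each column of the batch.
    marginals = np.stack([rng.permutation(batch[:, j]) for j in range(d)], axis=1)
    return np.where(mask, marginals, batch)
```

Because replacements are drawn from the data's own marginals, corrupted views stay on-distribution per feature while breaking cross-feature structure, which gives an encoder a useful distortion to be invariant against.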
Related papers
- "Give Me an Example Like This": Episodic Active Reinforcement Learning from Demonstrations [3.637365301757111]
Methods like Reinforcement Learning from Expert Demonstrations (RLED) introduce external expert demonstrations to facilitate agent exploration during the learning process.
How to select the best set of human demonstrations that is most beneficial for learning becomes a major concern.
This paper presents EARLY, an algorithm that enables a learning agent to generate optimized queries of expert demonstrations in a trajectory-based feature space.
arXiv Detail & Related papers (2024-06-05T08:52:21Z)
- Visual Imitation Learning with Calibrated Contrastive Representation [44.63125396964309]
Adversarial Imitation Learning (AIL) allows the agent to reproduce expert behavior with low-dimensional states and actions.
This paper proposes a simple and effective solution by incorporating contrastive representation learning into the visual AIL framework.
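The summary above mentions incorporating contrastive representation learning into adversarial imitation learning. The paper's calibrated variant is not detailed here, so as a hedged illustration, this sketch shows the standard InfoNCE contrastive objective (an assumption, not the paper's exact loss): each anchor embedding is pulled toward its matching positive view and pushed away from the other rows in the batch.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE loss over a batch (n, d): row i of `positives` is the
    positive view for row i of `anchors`; all other rows act as negatives.
    (Generic contrastive objective; the paper's calibrated variant differs.)"""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # cross-entropy on matched pairs
```

The loss is lowest when each anchor is most similar to its own positive, which is what drives the learned embeddings to separate distinct states or observations.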
arXiv Detail & Related papers (2024-01-21T04:18:30Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations [60.241144377865716]
We consider the imitation of sub-optimal demonstrations, with both a small clean demonstration set and a large noisy set.
We propose a method that evaluates and imitates at the sub-demonstration level, encoding action primitives of varying quality into different skills.
arXiv Detail & Related papers (2023-06-13T17:24:37Z)
- Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning [96.72185761508668]
Planning at Test-time (IMPLANT) is a new meta-algorithm for imitation learning.
We demonstrate that IMPLANT significantly outperforms benchmark imitation learning approaches on standard control environments.
arXiv Detail & Related papers (2022-04-07T17:16:52Z)
- Imitation Learning by State-Only Distribution Matching [2.580765958706854]
Imitation Learning from observation describes policy learning in a similar way to human learning.
We propose a non-adversarial learning-from-observations approach, together with an interpretable convergence and performance metric.
arXiv Detail & Related papers (2022-02-09T08:38:50Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment to learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophic forgetting of previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps the agent learn a sequence of tasks continually, while being more robust to forgetting and having better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z)
- Provable Representation Learning for Imitation Learning via Bi-level Optimization [60.059520774789654]
A common strategy in modern learning systems is to learn a representation that is useful for many tasks.
We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts' trajectories are available.
We instantiate this framework for the imitation learning settings of behavior cloning and learning from observations alone.
arXiv Detail & Related papers (2020-02-24T21:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.