Cross-Domain Imitation Learning via Optimal Transport
- URL: http://arxiv.org/abs/2110.03684v1
- Date: Thu, 7 Oct 2021 17:59:49 GMT
- Title: Cross-Domain Imitation Learning via Optimal Transport
- Authors: Arnaud Fickinger, Samuel Cohen, Stuart Russell, Brandon Amos
- Abstract summary: Cross-domain imitation learning studies how to leverage expert demonstrations of one agent to train an imitation agent with a different embodiment or morphology.
We propose Gromov-Wasserstein Imitation Learning (GWIL), a method for cross-domain imitation that uses the Gromov-Wasserstein distance to align and compare states between the different spaces of the agents.
- Score: 12.221297423161502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-domain imitation learning studies how to leverage expert demonstrations
of one agent to train an imitation agent with a different embodiment or
morphology. Comparing trajectories and stationary distributions between the
expert and imitation agents is challenging because they live on different
systems that may not even have the same dimensionality. We propose
Gromov-Wasserstein Imitation Learning (GWIL), a method for cross-domain
imitation that uses the Gromov-Wasserstein distance to align and compare states
between the different spaces of the agents. Our theory formally characterizes
the scenarios where GWIL preserves optimality, revealing its possibilities and
limitations. We demonstrate the effectiveness of GWIL in non-trivial continuous
control domains ranging from simple rigid transformation of the expert domain
to arbitrary transformation of the state-action space.
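As a rough illustration of the core computation, the sketch below couples two sets of states that live in different spaces by comparing their intra-domain distance matrices with the Gromov-Wasserstein coupling, assuming the POT (Python Optimal Transport) library; the pseudo-reward reading at the end is illustrative and not the authors' exact construction.
```python
# Minimal sketch of Gromov-Wasserstein state alignment, assuming the POT
# library (pip install POT). Function and variable names are illustrative.
import numpy as np
import ot

def gw_align(expert_states, agent_states):
    """Couple expert states (n, d1) with agent states (m, d2); d1 and d2 may differ."""
    # GW only compares intra-domain pairwise distances, so the two state
    # spaces never need to share coordinates or dimensionality.
    C1 = ot.dist(expert_states, expert_states)
    C2 = ot.dist(agent_states, agent_states)
    C1, C2 = C1 / C1.max(), C2 / C2.max()
    p = ot.unif(expert_states.shape[0])   # uniform weights over expert states
    q = ot.unif(agent_states.shape[0])    # uniform weights over agent states
    coupling, log = ot.gromov.gromov_wasserstein(
        C1, C2, p, q, loss_fun="square_loss", log=True)
    return coupling, log["gw_dist"]

# Example: a lower GW discrepancy suggests the agent's visited states are
# structurally closer to the expert's, which can serve as a pseudo-reward signal.
expert = np.random.randn(64, 4)    # e.g. 4-dimensional expert state space
agent = np.random.randn(64, 11)    # e.g. 11-dimensional imitation-agent state space
T, gw_dist = gw_align(expert, agent)
```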
Related papers
- xTED: Cross-Domain Adaptation via Diffusion-Based Trajectory Editing [21.37585797507323]
Cross-domain policy transfer methods mostly aim at learning domain correspondences or corrections to facilitate policy learning.
We propose the Cross-Domain Trajectory EDiting framework that employs a specially designed diffusion model for cross-domain trajectory adaptation.
Our proposed model architecture effectively captures the intricate dependencies among states, actions, and rewards, as well as the dynamics patterns within target data.
arXiv Detail & Related papers (2024-09-13T10:07:28Z)
- Cross-Domain Policy Transfer by Representation Alignment via Multi-Domain Behavioral Cloning [13.674493608667627]
We present a simple approach for cross-domain policy transfer that learns a shared latent representation across domains and a common abstract policy on top of it.
Our approach leverages multi-domain behavioral cloning on unaligned trajectories of proxy tasks and employs maximum mean discrepancy (MMD) as a regularization term to encourage cross-domain alignment (a minimal MMD sketch appears after this list).
arXiv Detail & Related papers (2024-07-24T00:13:00Z)
- Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation [59.41178047749177]
We focus on multi-domain Neural Machine Translation, with the goal of developing efficient models which can handle data from various domains seen during training and are robust to domains unseen during training.
We hypothesize that Sparse Mixture-of-Experts (SMoE) models are a good fit for this task, as they enable efficient model scaling.
We conduct a series of experiments aimed at validating the utility of SMoE for the multi-domain scenario, and find that a straightforward width scaling of Transformer is a simpler and surprisingly more efficient approach in practice, and reaches the same performance level as SMoE.
arXiv Detail & Related papers (2024-07-01T09:45:22Z)
- Towards Full-scene Domain Generalization in Multi-agent Collaborative Bird's Eye View Segmentation for Connected and Autonomous Driving [49.03947018718156]
We propose a unified domain generalization framework to be utilized during the training and inference stages of collaborative perception.
We also introduce an intra-system domain alignment mechanism to reduce or potentially eliminate the domain discrepancy among connected and autonomous vehicles.
arXiv Detail & Related papers (2023-11-28T12:52:49Z)
- Learning Representative Trajectories of Dynamical Systems via Domain-Adaptive Imitation [0.0]
We propose DATI, a deep reinforcement learning agent designed for domain-adaptive trajectory imitation.
Our experiments show that DATI outperforms baseline methods for imitation learning and optimal control in this setting.
Its generalization to a real-world scenario is shown through the discovery of abnormal motion patterns in maritime traffic.
arXiv Detail & Related papers (2023-04-19T15:53:48Z)
- Learn what matters: cross-domain imitation learning with task-relevant embeddings [77.34726150561087]
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge.
arXiv Detail & Related papers (2022-09-24T21:56:58Z)
- Variational Transfer Learning using Cross-Domain Latent Modulation [1.9662978733004601]
We introduce a novel cross-domain latent modulation mechanism into a variational autoencoder framework to achieve effective transfer learning.
Deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal.
The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied.
arXiv Detail & Related papers (2022-05-31T03:47:08Z)
- Adaptive Trajectory Prediction via Transferable GNN [74.09424229172781]
We propose a novel Transferable Graph Neural Network (T-GNN) framework, which jointly conducts trajectory prediction as well as domain alignment in a unified framework.
Specifically, a domain invariant GNN is proposed to explore the structural motion knowledge where the domain specific knowledge is reduced.
An attention-based adaptive knowledge learning module is further proposed to explore fine-grained individual-level feature representation for knowledge transfer.
arXiv Detail & Related papers (2022-03-09T21:08:47Z)
- Cross-domain Imitation from Observations [50.669343548588294]
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior.
In this paper, we study the problem of how to imitate tasks when there exist discrepancies between the expert and agent MDPs.
We present a novel framework to learn correspondences across such domains.
arXiv Detail & Related papers (2021-05-20T21:08:25Z)
- Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers [138.68213707587822]
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning.
We show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.
Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics (a rough sketch of this reward correction appears after this list).
arXiv Detail & Related papers (2020-06-24T17:47:37Z)
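For the last entry above (Off-Dynamics Reinforcement Learning), the reward modification can be pictured with two small domain classifiers whose log-odds estimate how much more likely a transition is under the target dynamics than the source dynamics. The sketch below is a rough reading of that idea; the network sizes and names are chosen here, not taken from the paper.
```python
# Hedged sketch: classifier-based reward correction for off-dynamics transfer.
# Each classifier outputs the logit of P(target domain | input), so their
# difference approximates the dynamics log-ratio used to adjust the reward.
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    """Binary classifier returning the logit of P(target | input)."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def corrected_reward(r, s, a, s_next, clf_sas, clf_sa):
    """Source-domain reward plus the classifier-based dynamics correction."""
    logit_sas = clf_sas(torch.cat([s, a, s_next], dim=-1))  # uses (s, a, s')
    logit_sa = clf_sa(torch.cat([s, a], dim=-1))            # uses (s, a) only
    return r + (logit_sas - logit_sa)
```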
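For the representation-alignment entry above (Cross-Domain Policy Transfer by Representation Alignment via Multi-Domain Behavioral Cloning), a minimal MMD regularizer between latent features from two domains might look like the following; the RBF kernel and bandwidth are assumptions, not details from the paper.
```python
# Hedged sketch of a squared-MMD regularizer with an RBF kernel, used to pull
# latent features from two domains toward a shared distribution.
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Typical use: total_loss = bc_loss + lambda_mmd * mmd_rbf(z_source, z_target)
```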