Intervention Design for Effective Sim2Real Transfer
- URL: http://arxiv.org/abs/2012.02055v1
- Date: Thu, 3 Dec 2020 16:38:54 GMT
- Title: Intervention Design for Effective Sim2Real Transfer
- Authors: Melissa Mozifian, Amy Zhang, Joelle Pineau, and David Meger
- Abstract summary: This work addresses the recent success of domain randomization and data augmentation for the sim2real setting.
We explain this success through the lens of causal inference, positioning domain randomization and data augmentation as interventions on the environment.
- Score: 48.9711031777803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this work is to address the recent success of domain
randomization and data augmentation for the sim2real setting. We explain this
success through the lens of causal inference, positioning domain randomization
and data augmentation as interventions on the environment which encourage
invariance to irrelevant features. Such interventions include visual
perturbations that have no effect on reward and dynamics. This encourages the
learning algorithm to be robust to these types of variations and learn to
attend to the true causal mechanisms for solving the task. This connection
leads to two key findings: (1) perturbations to the environment do not have to
be realistic, but merely show variation along dimensions that also vary in the
real world, and (2) use of an explicit invariance-inducing objective improves
generalization in sim2sim and sim2real transfer settings over just data
augmentation or domain randomization alone. We demonstrate the capability of
our method by performing zero-shot transfer of a robot arm reach task on a 7DoF
Jaco arm learning from pixel observations.
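The two findings above can be illustrated with a minimal NumPy sketch (hypothetical names; a linear map stands in for the paper's pixel encoder): a random visual perturbation that changes pixels but not reward or dynamics, plus an explicit invariance-inducing penalty between two independently perturbed views of the same state.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(obs):
    """Visual intervention: random brightness/contrast jitter.

    The perturbation need not be realistic; it only has to vary along
    dimensions (lighting, color) that also vary in the real world, and it
    leaves reward and dynamics untouched."""
    contrast = rng.uniform(0.8, 1.2)
    brightness = rng.uniform(-0.1, 0.1)
    return np.clip(obs * contrast + brightness, 0.0, 1.0)

# Hypothetical linear encoder standing in for a CNN feature extractor.
W = rng.normal(size=(8, 64 * 64 * 3)) / np.sqrt(64 * 64 * 3)

def encode(obs):
    return W @ obs.ravel()

def invariance_loss(obs):
    """Explicit invariance-inducing objective: features of two
    independently perturbed views of one state should coincide."""
    z1 = encode(perturb(obs))
    z2 = encode(perturb(obs))
    return float(np.mean((z1 - z2) ** 2))

obs = rng.uniform(size=(64, 64, 3))  # a simulated pixel observation
loss = invariance_loss(obs)
print(loss)
```

Adding this penalty to the usual RL objective is what distinguishes the explicit-invariance approach from plain domain randomization, which only feeds perturbed observations to the policy and hopes invariance emerges.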
Related papers
- Dynamics as Prompts: In-Context Learning for Sim-to-Real System Identifications [23.94013806312391]
We propose a novel approach that dynamically adjusts simulation environment parameters online using in-context learning.
We validate our approach across two tasks: object scooping and table air hockey.
Our approach delivers efficient and smooth system identification, advancing the deployment of robots in dynamic real-world scenarios.
arXiv Detail & Related papers (2024-10-27T07:13:38Z)
- Towards Open-World Mobile Manipulation in Homes: Lessons from the NeurIPS 2023 HomeRobot Open Vocabulary Mobile Manipulation Challenge [93.4434417387526]
We propose Open Vocabulary Mobile Manipulation as a key benchmark task for robotics.
We organized a NeurIPS 2023 competition featuring both simulation and real-world components to evaluate solutions to this task.
We detail the results and methodologies used, both in simulation and real-world settings.
arXiv Detail & Related papers (2024-07-09T15:15:01Z)
- Domain Randomization for Sim2real Transfer of Automatically Generated Grasping Datasets [0.0]
The present paper investigates how automatically generated grasps can be exploited in the real world.
More than 7000 reach-and-grasp trajectories have been generated with Quality-Diversity (QD) methods on 3 different arms and grippers, including parallel fingers and a dexterous hand, and tested in the real world.
Finally, a QD approach is proposed to make grasps more robust to domain randomization, achieving a transfer ratio of 84% on the Franka Research 3 arm.
arXiv Detail & Related papers (2023-10-06T18:26:09Z)
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridging the visual sim-to-real domain gap is domain randomization (DR).
arXiv Detail & Related papers (2023-07-28T05:47:24Z)
- Attention-based Adversarial Appearance Learning of Augmented Pedestrians [49.25430012369125]
We propose a method to synthesize realistic data for the pedestrian recognition task.
Our approach utilizes an attention mechanism driven by an adversarial loss to learn domain discrepancies.
Our experiments confirm that the proposed adaptation method is robust to such discrepancies and reveals both visual realism and semantic consistency.
arXiv Detail & Related papers (2021-07-06T15:27:00Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features, exploiting temporal cues in videos and inherent correlations across modalities to recognize gestures.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer [2.119586259941664]
We present a domain-invariant representation learning (DIRL) algorithm to adapt deep models to the physical environment with a small amount of real data.
Experiments on digit domains yield state-of-the-art performance on challenging benchmarks.
arXiv Detail & Related papers (2020-11-15T17:39:01Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often fail when deployed directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.