Facilitating Sim-to-real by Intrinsic Stochasticity of Real-Time
Simulation in Reinforcement Learning for Robot Manipulation
- URL: http://arxiv.org/abs/2304.06056v2
- Date: Sun, 6 Aug 2023 12:16:35 GMT
- Title: Facilitating Sim-to-real by Intrinsic Stochasticity of Real-Time
Simulation in Reinforcement Learning for Robot Manipulation
- Authors: Ram Dershan, Amir M. Soufi Enayati, Zengjie Zhang, Dean Richert, and
Homayoun Najjaran
- Abstract summary: We investigate the properties of intrinsic stochasticity of real-time simulation (RT-IS) of off-the-shelf simulation software.
RT-IS requires less randomization, is not task-dependent, and achieves better generalizability than the conventional domain-randomization-powered agents.
- Score: 1.6686307101054858
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Simulation is essential to reinforcement learning (RL) before implementation
in the real world, especially for safety-critical applications like robot
manipulation. Conventionally, RL agents are sensitive to the discrepancies
between the simulation and the real world, known as the sim-to-real gap. The
application of domain randomization, a technique used to fill this gap, is
limited to the imposition of heuristic-randomized models. We investigate the
properties of intrinsic stochasticity of real-time simulation (RT-IS) of
off-the-shelf simulation software and its potential to improve RL performance.
This improvement includes a higher tolerance to noise and model imprecision and
superiority to conventional domain randomization in terms of ease of use and
automation. Firstly, we conduct analytical studies to measure the correlation
of RT-IS with the utilization of computer hardware and validate its
comparability with the natural stochasticity of a physical robot. Then, we
exploit the RT-IS feature in the training of an RL agent. The simulation and
physical experiment results verify the feasibility and applicability of RT-IS
to robust agent training for robot manipulation tasks. The RT-IS-powered RL
agent outperforms conventional agents on robots with modeling uncertainties.
RT-IS requires less heuristic randomization, is not task-dependent, and
achieves better generalizability than the conventional
domain-randomization-powered agents. Our findings provide a new perspective on
the sim-to-real problem in practical applications like robot manipulation
tasks.
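The analytical study described above correlates RT-IS with computer hardware utilization. As an illustrative sketch only (the paper does not publish this code), the per-step wall-clock jitter of a real-time control loop could be quantified as follows, with `step_fn` standing in as a hypothetical placeholder for one simulator step:

```python
import statistics
import time

def measure_step_jitter(step_fn, n_steps=200):
    """Run the same loop repeatedly and record per-step wall-clock times.

    In a real-time simulator the physics step is synchronized to the wall
    clock, so load-dependent timing jitter feeds back into the dynamics --
    the intrinsic stochasticity (RT-IS) that the paper exploits.
    """
    durations = []
    for _ in range(n_steps):
        t0 = time.perf_counter()
        step_fn()  # one simulator step (hypothetical placeholder)
        durations.append(time.perf_counter() - t0)
    mean = statistics.mean(durations)
    cv = statistics.stdev(durations) / mean  # coefficient of variation
    return mean, cv
```

A higher coefficient of variation under heavier CPU load would reflect the hardware-dependent stochasticity that the paper measures and compares against the natural stochasticity of a physical robot.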
Related papers
- INSIGHT: Universal Neural Simulator for Analog Circuits Harnessing Autoregressive Transformers [13.94505840368669]
INSIGHT is an effective universal neural simulator in the analog front-end design automation loop.
It accurately predicts the performance metrics of analog circuits with just a few microseconds of inference time.
arXiv Detail & Related papers (2024-07-10T03:52:53Z)
- Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z)
- Learning to navigate efficiently and precisely in real environments [14.52507964172957]
Embodied AI literature focuses on end-to-end agents trained in simulators like Habitat or AI-Thor.
In this work we explore end-to-end training of agents in simulation in settings which minimize the sim2real gap.
arXiv Detail & Related papers (2024-01-25T17:50:05Z)
- Transfer of Reinforcement Learning-Based Controllers from Model- to Hardware-in-the-Loop [1.8218298349840023]
Reinforcement Learning has great potential for autonomously training agents to perform complex control tasks.
To use RL effectively in embedded system function development, the generated agents must be able to handle real-world applications.
This work focuses on accelerating the training process of RL agents by combining Transfer Learning (TL) and X-in-the-Loop (XiL) simulation.
arXiv Detail & Related papers (2023-10-25T09:13:12Z)
- SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
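The residual-learning idea summarized above can be sketched with ordinary least squares standing in for the paper's learning-based Unscented Kalman Filter; the function name, the linear model, and the bias term are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def learn_residuals(states, sim_preds, real_obs):
    """Fit a linear model of the sim-to-real residual e = real - sim.

    Illustrative stand-in for a learned residual model: ordinary least
    squares with a bias term. The corrected prediction for a new state s
    and simulator prediction p is p + residual_model(s).
    """
    X = np.hstack([states, np.ones((len(states), 1))])  # features + bias
    E = real_obs - sim_preds                            # residual targets
    W, *_ = np.linalg.lstsq(X, E, rcond=None)
    return lambda s, p: p + np.hstack([s, [1.0]]) @ W
```

Adding the learned residual to the simulator output is one simple way to "close the reality gap" between a dynamic model and hardware measurements.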
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Sim2real for Reinforcement Learning Driven Next Generation Networks [4.29590751118341]
Reinforcement Learning (RL) models are regarded as the key to solving RAN-related multi-objective optimization problems.
One of the main reasons is the modelling gap between the simulation and the real environment, which could make the RL agent trained by simulation ill-equipped for the real environment.
This article brings to the fore the sim2real challenge within the context of Open RAN (O-RAN).
Several use cases are presented to exemplify and demonstrate failure modes of simulation-trained RL models in real environments.
arXiv Detail & Related papers (2022-06-08T12:40:24Z)
- Robot Learning from Randomized Simulations: A Review [59.992761565399185]
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive.
We focus on a technique named 'domain randomization' which is a method for learning from randomized simulations.
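For contrast with RT-IS, domain randomization as reviewed above amounts to resampling physics parameters for each training episode within hand-tuned ranges. A minimal sketch (parameter names and the uniform scaling scheme are hypothetical):

```python
import random

def sample_episode_params(base, spread, rng=None):
    """Draw one set of physics parameters for a training episode.

    Minimal domain-randomization sketch: each episode scales every nominal
    parameter by a factor drawn uniformly from [1 - spread, 1 + spread].
    Choosing these ranges is the heuristic tuning burden that RT-IS avoids.
    """
    rng = rng or random.Random()
    return {k: v * rng.uniform(1.0 - spread[k], 1.0 + spread[k])
            for k, v in base.items()}
```

An agent trained across many such draws must succeed under all of them, which is what makes the resulting policy robust to model mismatch.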
arXiv Detail & Related papers (2021-11-01T13:55:41Z)
- Model-based Reinforcement Learning from Signal Temporal Logic Specifications [0.17205106391379021]
We propose expressing desired high-level robot behavior using a formal specification language known as Signal Temporal Logic (STL) as an alternative to reward/cost functions.
The proposed algorithm is empirically evaluated on simulations of robotic system such as a pick-and-place robotic arm, and adaptive cruise control for autonomous vehicles.
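STL's quantitative semantics replace a scalar reward with a robustness value. As a minimal sketch (not the paper's algorithm), the robustness of the formula G(x < c) over a finite trajectory is the worst-case margin:

```python
def robustness_always_below(signal, threshold):
    """Quantitative semantics of the STL formula G(x < c) on a finite
    trace: the minimum margin c - x_t over time, positive iff the
    specification is satisfied everywhere."""
    return min(threshold - x for x in signal)
```

Maximizing such a robustness value in place of an accumulated reward is the core idea behind STL-driven RL objectives.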
arXiv Detail & Related papers (2020-11-10T07:31:47Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often fail when deployed directly in the real world, a difficulty known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.