Real2Sim or Sim2Real: Robotics Visual Insertion using Deep Reinforcement
Learning and Real2Sim Policy Adaptation
- URL: http://arxiv.org/abs/2206.02679v1
- Date: Mon, 6 Jun 2022 15:27:25 GMT
- Title: Real2Sim or Sim2Real: Robotics Visual Insertion using Deep Reinforcement
Learning and Real2Sim Policy Adaptation
- Authors: Yiwen Chen, Xue Li, Sheng Guo, Xian Yao Ng, Marcelo Ang
- Abstract summary: In this work, we solve the insertion task using a purely visual reinforcement learning solution with minimal infrastructure requirements.
We also propose a novel sim2real strategy, Real2Sim, which offers a simpler approach to policy adaptation.
- Score: 8.992053371569678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning is widely used in robotics tasks such as
insertion and grasping. However, without a practical sim2real strategy, a
policy trained in simulation can fail on the real task. Sim2real strategies
have also been studied extensively, but most of these methods rely on heavy
image rendering, domain randomization training, or tuning. In this work, we
solve the insertion task using a purely visual reinforcement learning solution
with minimal infrastructure requirements. We also propose a novel sim2real
strategy, Real2Sim, which offers a simpler approach to policy adaptation. We
discuss the advantages of Real2Sim compared with Sim2Real.
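The abstract does not detail how the Real2Sim adaptation is implemented. As a rough illustration of the idea only (map real observations back into the simulator's observation distribution at deployment time, rather than making the simulator look like reality), a minimal sketch is given below; the adapter, policy, and environment interfaces are assumptions for illustration, not the paper's actual code.

```python
# Hedged sketch of a Real2Sim deployment loop (illustrative names only): the
# policy is trained purely in simulation, and at test time each *real*
# observation is first mapped into the simulator's observation space.
import numpy as np


class RealToSimAdapter:
    """Maps a real RGB observation into a simulation-like observation.

    The adaptation here is a stand-in (per-channel re-normalization to the
    simulator's color statistics); in practice it could be any learned
    image-to-image mapping.
    """

    def __init__(self, sim_mean: np.ndarray, sim_std: np.ndarray):
        self.sim_mean = sim_mean  # per-channel mean of simulator renderings
        self.sim_std = sim_std    # per-channel std of simulator renderings

    def __call__(self, real_rgb: np.ndarray) -> np.ndarray:
        real = real_rgb.astype(np.float32) / 255.0
        real = (real - real.mean(axis=(0, 1))) / (real.std(axis=(0, 1)) + 1e-6)
        return real * self.sim_std + self.sim_mean


def deploy(policy, adapter, env, episode_len=200):
    """Run a frozen simulation-trained policy on a (real) environment.

    `policy` and `env` are assumed minimal interfaces: policy(obs) -> action,
    env.reset() -> obs, env.step(action) -> (obs, done).
    """
    obs = env.reset()
    for _ in range(episode_len):
        sim_like_obs = adapter(obs)    # Real2Sim adaptation step
        action = policy(sim_like_obs)  # policy is reused unchanged
        obs, done = env.step(action)
        if done:
            break
```

The appeal of this direction, as argued in the abstract, is that the simulation-trained policy stays frozen: only a lightweight observation mapping has to be fitted when moving to the real robot.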
Related papers
- Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL [25.991354823569033]
We show that in many regimes, while direct sim2real transfer may fail, we can utilize the simulator to learn a set of exploratory policies.
In particular, in the setting of low-rank MDPs, we show that these exploratory policies can be coupled with simple, practical approaches for learning in the real world.
This is the first evidence that simulation transfer yields a provable gain in reinforcement learning in settings where direct sim2real transfer fails.
arXiv Detail & Related papers (2024-10-26T19:12:27Z) - DrEureka: Language Model Guided Sim-To-Real Transfer [64.14314476811806]
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale.
In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design.
Our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball.
arXiv Detail & Related papers (2024-06-04T04:53:05Z) - TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction [25.36756787147331]
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.
We propose a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework.
We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly.
arXiv Detail & Related papers (2024-05-16T17:59:07Z) - Sim-and-Real Reinforcement Learning for Manipulation: A Consensus-based
Approach [4.684126055213616]
We propose a Consensus-based Sim-And-Real deep reinforcement learning algorithm (CSAR) for manipulator pick-and-place tasks.
We train agents in simulation and in the real world to obtain optimal policies for both the simulated and real worlds.
arXiv Detail & Related papers (2023-02-26T22:27:23Z) - Sim2real Transfer Learning for Point Cloud Segmentation: An Industrial
Application Case on Autonomous Disassembly [55.41644538483948]
We present an industrial application case that uses sim2real transfer learning for point cloud data.
We provide insights on how to generate and process synthetic point cloud data.
Additionally, a novel patch-based attention network is proposed to tackle this problem.
arXiv Detail & Related papers (2023-01-12T14:00:37Z) - Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving
Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z) - DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to
Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the potential of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z) - Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial
Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, RL policies trained in simulation often do not transfer directly to the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z) - Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We learn the peg-hole insertion task in a simulator and then transfer the learned model to the real robot.
We show that the transferred policy, which takes only RGB-D images and joint information (proprioception), can perform well on the real robot; a minimal sketch of such an observation is given after this list.
arXiv Detail & Related papers (2020-05-29T05:58:54Z)
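The peg-hole insertion entry above only states that the transferred policy consumes RGB-D images plus joint proprioception. A minimal, hypothetical sketch of packing such an observation for a policy network is shown below; the shapes, dictionary keys, and helper name are illustrative assumptions, not taken from that paper.

```python
# Hypothetical sketch of an RGB-D + proprioception observation for a
# visuomotor insertion policy. Shapes and names are illustrative only.
import numpy as np


def build_observation(rgb: np.ndarray, depth: np.ndarray,
                      joint_pos: np.ndarray, joint_vel: np.ndarray) -> dict:
    """Pack camera and proprioceptive signals into one policy input."""
    assert rgb.shape[:2] == depth.shape, "RGB and depth must be registered"
    rgbd = np.concatenate(
        [rgb.astype(np.float32) / 255.0,        # H x W x 3, scaled to [0, 1]
         depth.astype(np.float32)[..., None]],  # H x W x 1, depth in metres
        axis=-1,
    )                                           # -> H x W x 4 image tensor
    proprio = np.concatenate([joint_pos, joint_vel])  # e.g. 7 + 7 joint values
    return {"rgbd": rgbd, "proprio": proprio}
```

A visuomotor policy would typically encode the `rgbd` tensor with a small CNN and concatenate the result with `proprio` before the action head.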