SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via
Differentiable Physics-Based Simulation and Rendering
- URL: http://arxiv.org/abs/2210.15185v3
- Date: Tue, 23 May 2023 06:56:30 GMT
- Title: SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via
Differentiable Physics-Based Simulation and Rendering
- Authors: Jun Lv, Yunhai Feng, Cheng Zhang, Shuang Zhao, Lin Shao, Cewu Lu
- Abstract summary: We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework in real-world experiments to accomplish three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
- Score: 49.78647219715034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based reinforcement learning (MBRL) is recognized as having the
potential to be significantly more sample-efficient than model-free RL. How an
accurate model can be developed automatically and efficiently from raw sensory
inputs (such as images), especially for complex environments and tasks, is a
challenging problem that hinders the broad application of MBRL in the real
world. In this work, we propose a sensing-aware model-based reinforcement
learning system called SAM-RL. Leveraging differentiable physics-based
simulation and rendering, SAM-RL automatically updates the model by comparing
rendered images with real raw images and learns the policy efficiently. With
the sensing-aware learning pipeline, SAM-RL allows a robot to select an
informative viewpoint to monitor the task process. We apply our framework in
real-world experiments to accomplish three manipulation tasks: robotic
assembly, tool manipulation, and deformable object manipulation. We demonstrate
the effectiveness of SAM-RL via extensive experiments. Videos are available on
our project webpage at https://sites.google.com/view/rss-sam-rl.
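The render-and-compare model update described in the abstract can be illustrated with a toy sketch: a simulation parameter vector `theta` (e.g., an object pose) is refined by gradient descent on the pixel-wise difference between a differentiable "rendering" of the simulated state and a real image. The linear renderer and parameter names below are illustrative stand-ins, not SAM-RL's actual physics simulator or renderer.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 3))             # fixed "rendering" map: params -> 64-pixel image
theta_true = np.array([0.5, -1.0, 2.0])  # unknown real-world parameters
real_image = W @ theta_true              # observation from the real camera

theta = np.zeros(3)                      # initial simulation parameters
lr = 0.01
for _ in range(500):
    rendered = W @ theta                 # differentiable render of the sim state
    residual = rendered - real_image     # pixel-wise comparison with the real image
    grad = W.T @ residual                # analytic gradient of 0.5 * ||residual||^2
    theta -= lr * grad                   # gradient step on the model parameters

print(np.round(theta, 3))                # converges toward theta_true
```

In the full system the gradient flows through a physics simulator and renderer rather than a fixed linear map, but the loop structure, render, compare, update, is the same.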
Related papers
- ASID: Active Exploration for System Identification in Robotic Manipulation [32.27299045059514]
We propose a learning system that can leverage a small amount of real-world data to autonomously refine a simulation model and then plan an accurate control strategy.
We demonstrate the efficacy of this paradigm in identifying articulation, mass, and other physical parameters in several challenging robotic manipulation tasks.
arXiv Detail & Related papers (2024-04-18T16:35:38Z)
- Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, achieving results of similar quality to relevant alternatives.
arXiv Detail & Related papers (2024-04-02T11:44:37Z)
- TWIST: Teacher-Student World Model Distillation for Efficient Sim-to-Real Transfer [23.12048336150798]
This paper proposes TWIST (Teacher-Student World Model Distillation for Sim-to-Real Transfer) to achieve efficient sim-to-real transfer of vision-based model-based RL.
Specifically, TWIST leverages state observations as readily accessible, privileged information commonly garnered from a simulator to significantly accelerate sim-to-real transfer.
arXiv Detail & Related papers (2023-11-07T00:18:07Z)
- Sim-to-Real Deep Reinforcement Learning with Manipulators for Pick-and-place [1.7478203318226313]
When a deep reinforcement learning (DRL) model is transferred from simulation to the real world, its performance often degrades.
This paper proposes a self-supervised vision-based DRL method that allows robots to pick and place objects effectively.
arXiv Detail & Related papers (2023-09-17T11:51:18Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
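The contrastive critic referenced above can be sketched with an InfoNCE-style loss: state-action embeddings are scored against goal embeddings, and training pulls each state-action pair toward the goal it actually reached. The tiny encoders and dimensions below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def encode(x, params):
    """Tiny stand-in encoder: one linear layer followed by tanh."""
    return np.tanh(x @ params)

def infonce_loss(sa_batch, goal_batch, sa_params, goal_params):
    """Cross-entropy on the similarity matrix; diagonal entries are the positives."""
    phi = encode(sa_batch, sa_params)      # (B, d) state-action embeddings
    psi = encode(goal_batch, goal_params)  # (B, d) goal embeddings
    logits = phi @ psi.T                   # (B, B) similarity scores
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))    # pull matched (state-action, goal) pairs together

rng = np.random.default_rng(0)
sa = rng.normal(size=(8, 6))     # 8 state-action vectors
goals = rng.normal(size=(8, 4))  # the goals those pairs reached
loss = infonce_loss(sa, goals, rng.normal(size=(6, 5)), rng.normal(size=(4, 5)))
print(loss)
```

Minimizing this loss makes the critic's similarity score behave like a goal-reaching value function, which is the self-supervised casting of RL the summary describes.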
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning [61.3506230781327]
In robotics, one approach to generating training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z) - Multitask Adaptation by Retrospective Exploration with Learned World
Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z) - MELD: Meta-Reinforcement Learning from Images via Latent State Models [109.1664295663325]
We develop an algorithm for meta-RL from images that performs inference in a latent state model to quickly acquire new skills.
MELD is the first meta-RL algorithm trained in a real-world robotic control setting from images.
arXiv Detail & Related papers (2020-10-26T23:50:30Z) - RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
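The RL-scene consistency loss in the RL-CycleGAN entry above penalizes any change in Q-values caused by the sim-to-real translation G(x). The toy Q-function and generators below are hypothetical stand-ins used only to show how the penalty is formed, not the paper's networks.

```python
import numpy as np

def q_values(image, q_weights):
    """Toy Q-function: linear map from the flattened image to per-action values."""
    return image.flatten() @ q_weights

def rl_scene_consistency_loss(sim_image, generator, q_weights):
    """Mean squared gap between Q-values before and after image translation."""
    translated = generator(sim_image)
    gap = q_values(sim_image, q_weights) - q_values(translated, q_weights)
    return np.mean(gap ** 2)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
qw = rng.normal(size=(64, 4))  # 64 pixels -> 4 action values

identity_gen = lambda x: x     # a perfectly Q-preserving generator incurs no penalty
noisy_gen = lambda x: x + 0.1 * rng.normal(size=x.shape)  # a distorting generator does

print(rl_scene_consistency_loss(img, identity_gen, qw))   # 0.0
print(rl_scene_consistency_loss(img, noisy_gen, qw) > 0)  # True
```

Adding this term to the CycleGAN objective steers the generator toward translations that preserve exactly the image content the Q-function relies on.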
This list is automatically generated from the titles and abstracts of the papers in this site.