A Contact-Safe Reinforcement Learning Framework for Contact-Rich Robot
Manipulation
- URL: http://arxiv.org/abs/2207.13438v1
- Date: Wed, 27 Jul 2022 10:35:44 GMT
- Title: A Contact-Safe Reinforcement Learning Framework for Contact-Rich Robot
Manipulation
- Authors: Xiang Zhu, Shucheng Kang and Jianyu Chen
- Abstract summary: We propose a contact-safe reinforcement learning framework for contact-rich robot manipulation.
When the RL policy causes unexpected collisions between the robot arm and the environment, our framework immediately detects the collision and ensures that the contact force remains small.
Our method keeps the contact force small in both task space and joint space, even when the policy encounters unseen scenarios with unexpected collisions.
- Score: 5.0768619194124005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning shows great potential to solve complex contact-rich
robot manipulation tasks. However, the safety of using RL in the real world is
a crucial problem, since unexpected dangerous collisions might happen when the
RL policy is imperfect during training or in unseen scenarios. In this paper,
we propose a contact-safe reinforcement learning framework for contact-rich
robot manipulation, which maintains safety in both the task space and joint
space. When the RL policy causes unexpected collisions between the robot arm
and the environment, our framework immediately detects the collision and
ensures that the contact force remains small. Furthermore, the end-effector is
enforced to perform contact-rich tasks compliantly while remaining robust to
external disturbances. We train the RL policy in simulation and transfer it to
the real robot. Real world experiments on robot wiping tasks show that our
method keeps the contact force small in both task space and joint space, even
when the policy operates in unseen scenarios with unexpected collisions, while
rejecting disturbances to the main task.
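To make the described mechanism concrete, below is a minimal illustrative sketch of the kind of safety layer the abstract outlines: estimated external forces are monitored, and the policy's command is damped when an unexpected collision is detected. This is an assumption-based sketch, not the authors' implementation; the thresholds (F_MAX, TAU_MAX), the damping factor, and all function interfaces are hypothetical.

```python
# Hedged sketch of a contact-safety layer around an RL policy.
# NOTE: illustrative only; F_MAX, TAU_MAX, and damping are made-up values,
# and the interfaces do not come from the paper.
import numpy as np

F_MAX = 15.0    # assumed task-space contact-force threshold [N]
TAU_MAX = 5.0   # assumed per-joint external-torque threshold [Nm]


def collision_detected(ext_wrench, ext_joint_torques):
    """Flag an unexpected collision from estimated external forces."""
    task_space_hit = np.linalg.norm(ext_wrench[:3]) > F_MAX
    joint_space_hit = np.any(np.abs(ext_joint_torques) > TAU_MAX)
    return task_space_hit or joint_space_hit


def safe_command(policy_action, ext_wrench, ext_joint_torques, damping=0.1):
    """Damp the policy's command on collision so contact forces stay small,
    instead of letting an imperfect policy push harder into the contact."""
    if collision_detected(ext_wrench, ext_joint_torques):
        return damping * policy_action
    return policy_action


# Example: a detected collision scales the commanded motion down 10x.
action = np.array([0.2, -0.1, 0.3])
wrench = np.array([20.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # 20 N exceeds F_MAX
torques = np.zeros(7)
print(safe_command(action, wrench, torques))  # ~[0.02 -0.01 0.03]
```

The point of the design, as the abstract describes it, is that safety is enforced at both levels: a collision anywhere along the arm (joint space) or at the end-effector (task space) triggers the same protective response.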
Related papers
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera for pursuit-evasion interactions in the wild.
arXiv Detail & Related papers (2023-08-30T17:59:05Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Safe reinforcement learning of dynamic high-dimensional robotic tasks: navigation, manipulation, interaction [31.553783147007177]
In reinforcement learning, safety is especially fundamental, since the agent must explore an environment without causing any damage.
This paper introduces a new formulation of safe exploration for reinforcement learning of various robotic tasks.
Our approach applies to a wide class of robotic platforms and enforces safety even under complex collision constraints learned from data.
arXiv Detail & Related papers (2022-09-27T11:23:49Z)
- Protective Policy Transfer [37.897395735552706]
We introduce a policy transfer algorithm for adapting robot motor skills to novel scenarios.
Our algorithm trains two control policies: a task policy optimized to complete the task of interest, and a protective policy dedicated to keeping the robot away from unsafe events.
We evaluate our approach on four simulated robot locomotion problems and a 2D navigation problem.
arXiv Detail & Related papers (2020-12-11T22:10:54Z)
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
- Uncertainty-aware Contact-safe Model-based Reinforcement Learning [17.10030262602653]
We present contact-safe Model-based Reinforcement Learning (MBRL) for robot applications that achieves contact-safe behaviors in the learning process.
We associate the probabilistic Model Predictive Control (pMPC) control limits with the model uncertainty, so that the allowed acceleration of the controlled behavior is adjusted according to learning progress (see the sketch after this list).
arXiv Detail & Related papers (2020-10-16T05:11:25Z)
- Deep Reinforcement Learning for Contact-Rich Skills Using Compliant Movement Primitives [0.0]
Further integration of industrial robots is hampered by their limited flexibility, adaptability, and decision-making skills.
We propose different pruning methods that facilitate convergence and generalization.
We demonstrate that the proposed method can learn insertion skills that are invariant to space, size, shape, and closely related scenarios.
arXiv Detail & Related papers (2020-08-30T17:29:43Z)
- Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We use a simulator to learn the peg-hole insertion problem and then transfer the learned model to the real robot.
We show that the transferred policy, which only takes RGB-D images and joint information (proprioception) as input, can perform well on the real robot.
arXiv Detail & Related papers (2020-05-29T05:58:54Z)
- Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks [70.56451186797436]
We study how to use meta-reinforcement learning to solve the bulk of the problem in simulation.
We demonstrate our approach by training an agent to successfully perform challenging real-world insertion tasks.
arXiv Detail & Related papers (2020-04-29T18:00:22Z)
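As a rough illustration of the uncertainty-aware idea in the pMPC entry above, the sketch below ties an allowed acceleration bound to a model's predictive standard deviation, tightening the limit when the learned model is uncertain and loosening it as training progresses. The mapping and all constants are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch: uncertainty-scaled control limit (hypothetical constants).
import numpy as np


def acceleration_limit(pred_std, a_min=0.05, a_max=1.0, k=2.0):
    """Map predictive standard deviation to an allowed acceleration bound.

    Higher uncertainty -> tighter limit, so the controller stays
    conservative early in learning; a_min, a_max, and k are made-up.
    """
    return float(np.clip(a_max / (1.0 + k * np.mean(pred_std)), a_min, a_max))


# Example: an uncertain model gets a much smaller acceleration budget.
print(acceleration_limit(np.array([1.0, 0.8])))    # ~0.36
print(acceleration_limit(np.array([0.01, 0.02])))  # ~0.97
```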