A Learning Approach to Robot-Agnostic Force-Guided High Precision Assembly
- URL: http://arxiv.org/abs/2010.08052v3
- Date: Mon, 2 Aug 2021 12:48:47 GMT
- Title: A Learning Approach to Robot-Agnostic Force-Guided High Precision Assembly
- Authors: Jieliang Luo and Hui Li
- Abstract summary: We propose a learning approach to high-precision robotic assembly problems.
We focus on the contact-rich phase, where the assembly pieces are in close contact with each other.
Our training environment is robotless, as the end-effector is not attached to any specific robot.
- Score: 6.062589413216726
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work we propose a learning approach to high-precision robotic
assembly problems. We focus on the contact-rich phase, where the assembly
pieces are in close contact with each other. Unlike many learning-based
approaches that heavily rely on vision or spatial tracking, our approach takes
force/torque in task space as the only observation. Our training environment is
robotless, as the end-effector is not attached to any specific robot. Trained
policies can then be applied to different robotic arms without re-training.
This approach can greatly reduce the complexity of performing contact-rich robotic
assembly in the real world, especially in unstructured settings such as
architectural construction. To achieve this, we have developed a new distributed
RL agent, named Recurrent Distributed DDPG (RD2), which extends Ape-X DDPG with
recurrence and makes two structural improvements to prioritized experience
replay. Our results show that RD2 is able to solve two fundamental
high-precision assembly tasks, lap-joint and peg-in-hole, and outperforms two
state-of-the-art algorithms, Ape-X DDPG and PPO with LSTM. We have successfully
evaluated our robot-agnostic policies on three robotic arms, Kuka KR60, Franka
Panda, and UR10, in simulation. The video presenting our experiments is
available at https://sites.google.com/view/rd2-rl
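The page carries no code, but the central design, a recurrent DDPG-family policy that observes only task-space force/torque, can be illustrated with a minimal PyTorch sketch. Everything below (the FTActor name, layer sizes, and the tanh-bounded motion command) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FTActor(nn.Module):
    """Recurrent actor: force/torque history -> end-effector motion.

    Hypothetical sketch of an RD2-style policy. Observations are 6-D
    force/torque readings in task space; the LSTM summarizes contact
    history, which disambiguates states a single F/T reading cannot.
    """

    def __init__(self, obs_dim: int = 6, act_dim: int = 6, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded motion command
        )

    def forward(self, ft_seq, state=None):
        # ft_seq: (batch, time, 6) history of force/torque readings
        out, state = self.lstm(ft_seq, state)
        return self.head(out[:, -1]), state  # action for the latest step

# Rollout usage: feed one F/T reading at a time, carrying the LSTM state.
actor = FTActor()
hidden = None
ft_reading = torch.zeros(1, 1, 6)  # one 6-D wrench sample
action, hidden = actor(ft_reading, hidden)
```

Because the observation is a wrench rather than joint states, the same trained policy can drive any arm whose controller accepts task-space motion commands, which is what makes the approach robot-agnostic.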
Related papers
- Development of a PPO-Reinforcement Learned Walking Tripedal Soft-Legged Robot using SOFA [0.0]
This paper presents a ready-to-deploy walking, tripedal, soft-legged robot based on PPO-RL.
The robot achieves an 82% success rate in reaching a single goal.
While traversing the platform's steps, it follows the commanded path with an accumulated squared error deviation of 19 mm.
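As a rough illustration of the PPO-RL training setup this summary describes, the snippet below uses Stable-Baselines3 on a stand-in Gymnasium environment; the SOFA soft-robot environment is not assumed here, so Pendulum-v1 is a placeholder and the hyperparameters are illustrative only.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder environment; the paper couples PPO with a SOFA soft-body
# simulation of the tripedal robot, which is not reproduced here.
env = gym.make("Pendulum-v1")

model = PPO("MlpPolicy", env, learning_rate=3e-4, n_steps=2048, verbose=1)
model.learn(total_timesteps=100_000)

# Deploy the learned gait policy deterministically.
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```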
arXiv Detail & Related papers (2025-04-12T14:46:51Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, are extremely robust even under perturbations, and exhibit emergent recovery and correction behaviors.
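SERL's actual API is not reproduced here. The sketch below only illustrates the generic pattern the summary mentions, computing rewards from a learned success classifier and resetting the scene with a scripted routine, via a standard Gymnasium wrapper; classifier and scripted_reset are hypothetical callables.

```python
import gymnasium as gym

class LearnedRewardWrapper(gym.Wrapper):
    """Replace the env reward with a learned success classifier.

    Generic pattern, not SERL's real API: `classifier(obs) -> float` is a
    hypothetical learned model scoring task success from the observation,
    and `scripted_reset()` returns the scene to a start configuration.
    """

    def __init__(self, env, classifier, scripted_reset):
        super().__init__(env)
        self.classifier = classifier
        self.scripted_reset = scripted_reset

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        reward = float(self.classifier(obs))  # learned reward signal
        return obs, reward, terminated, truncated, info

    def reset(self, **kwargs):
        self.scripted_reset()  # e.g. retract the arm, re-place objects
        return self.env.reset(**kwargs)
```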
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
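A minimal sketch of the contrastive formulation, assuming the common InfoNCE-style setup in which a critic scores (state, action) embeddings against goal embeddings and the batch diagonal supplies the positives; the names and layer sizes are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    """Scores (state, action) embeddings against goal embeddings."""

    def __init__(self, s_dim, a_dim, g_dim, z_dim=64):
        super().__init__()
        self.sa_enc = nn.Sequential(nn.Linear(s_dim + a_dim, 256), nn.ReLU(),
                                    nn.Linear(256, z_dim))
        self.g_enc = nn.Sequential(nn.Linear(g_dim, 256), nn.ReLU(),
                                   nn.Linear(256, z_dim))

    def forward(self, s, a, g):
        # Inner-product critic: high score when (s, a) leads toward goal g.
        phi = self.sa_enc(torch.cat([s, a], dim=-1))  # (B, z)
        psi = self.g_enc(g)                           # (B, z)
        return phi @ psi.T                            # (B, B) logits

def infonce_loss(critic, s, a, g):
    # Each row's positive is its own goal; other rows act as negatives.
    logits = critic(s, a, g)
    labels = torch.arange(logits.shape[0])
    return F.cross_entropy(logits, labels)
```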
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices by learning both to do and to undo the task, while inferring the reward function from the demonstrations.
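The do/undo structure can be sketched schematically as below; forward_policy, backward_policy, the update hook, and demo_reward are hypothetical placeholders standing in for MEDAL++'s actual components.

```python
def autonomous_practice(env, forward_policy, backward_policy,
                        demo_reward, horizon=200, episodes=10):
    """Alternate task attempts with learned 'undo' episodes.

    Schematic of the self-improving loop: the forward policy practices
    the task; the backward policy returns the scene toward the start-state
    distribution, so practice continues without human resets.
    `demo_reward(obs) -> float` is a reward inferred from demonstrations.
    """
    obs, _ = env.reset()  # one manual reset; none needed afterwards
    for ep in range(episodes):
        policy = forward_policy if ep % 2 == 0 else backward_policy
        for _ in range(horizon):
            action = policy(obs)
            obs, _, terminated, truncated, _ = env.step(action)
            reward = demo_reward(obs)  # learned, not hand-engineered
            policy.update(obs, action, reward)  # hypothetical update hook
            if terminated or truncated:
                break
```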
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment [25.186525630548356]
Sirius is a principled framework for humans and robots to collaborate through a division of work.
Partially autonomous robots handle the major portion of decision-making, operating where they work reliably.
We introduce a new learning algorithm to improve the policy's performance using the data collected from task executions.
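One common way to realize "improving the policy on deployment data" is behavioral cloning that up-weights human-intervention samples; the sketch below shows that generic pattern, with the 2x weight and the is_human flag as assumptions rather than Sirius's actual algorithm.

```python
import torch.nn.functional as F

def weighted_bc_loss(policy, obs, expert_action, is_human):
    """Behavioral cloning that up-weights human-intervention samples.

    `is_human` marks transitions where the operator intervened; those
    corrections are treated as more informative than routine autonomous
    execution, so they get double weight (an arbitrary illustrative choice).
    """
    weights = 1.0 + is_human.float()  # human samples weighted 2x
    per_sample = F.mse_loss(policy(obs), expert_action,
                            reduction="none").mean(dim=-1)
    return (weights * per_sample).mean()

# Usage sketch: obs and expert_action come from the deployment dataset;
# is_human is a boolean tensor flagging operator takeovers.
```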
arXiv Detail & Related papers (2022-11-15T18:53:39Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve challenging simulated tasks, such as humanoid locomotion and stand-up, with unprecedented sample efficiency.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
arXiv Detail & Related papers (2022-03-15T21:49:20Z)
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system toward achieving the four requirements in its name (sample efficiency, generalizability, compositionality, and incremental learning).
Our system first takes the raw point clouds gathered by a camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment, represented as a URDF.
The robot then uses interactive perception to interact with the environment, verifying and modifying the URDF online.
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
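The change of action interface can be pictured as an environment wrapper: the policy emits a primitive index plus continuous arguments, and the wrapper unrolls that primitive over several low-level steps. The primitive set and toy controllers below are placeholders, not the paper's RAPS library.

```python
import numpy as np
import gymnasium as gym

class PrimitiveActions(gym.Wrapper):
    """Expose parameterized primitives instead of low-level actions.

    Policy action = (primitive index, continuous arguments). Each call
    unrolls the chosen primitive for several low-level steps, so one
    policy decision covers a temporally extended behavior.
    """

    PRIMITIVES = ["move_delta", "grasp", "release"]

    def step(self, action):
        idx, args = int(action[0]), np.asarray(action[1:])
        total_reward, obs = 0.0, None
        for low_action in self._unroll(self.PRIMITIVES[idx], args):
            obs, r, terminated, truncated, info = self.env.step(low_action)
            total_reward += r
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info

    def _unroll(self, name, args, n=10):
        # Placeholder controllers; a real primitive library would use IK,
        # gripper commands, etc. Here each primitive is n identical steps.
        if name == "move_delta":
            yield from (args / n for _ in range(n))
        elif name == "grasp":
            yield from (np.array([0, 0, 0, 1.0]) for _ in range(n))
        else:  # release
            yield from (np.array([0, 0, 0, -1.0]) for _ in range(n))
```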
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- robo-gym -- An Open Source Toolkit for Distributed Deep Reinforcement Learning on Real and Simulated Robots [0.5161531917413708]
We propose robo-gym, an open source toolkit to increase the use of deep reinforcement learning with real robots.
We demonstrate a unified setup for simulation and real environments which enables a seamless transfer from training in simulation to application on the robot.
We showcase the capabilities and the effectiveness of the framework with two real world applications featuring industrial robots.
arXiv Detail & Related papers (2020-07-06T13:51:33Z)
- Smooth Exploration for Robotic Reinforcement Learning [11.215352918313577]
Reinforcement learning (RL) enables robots to learn skills from interactions with the real world.
In practice, the unstructured step-based exploration used in Deep RL leads to jerky motion patterns on real robots.
We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms.
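State-dependent exploration replaces independent per-step noise with a perturbation that is a linear function of state features under a periodically resampled weight matrix, so nearby states receive similar noise and the motion stays smooth. A minimal sketch, with feature and action dimensions assumed:

```python
import numpy as np

class StateDependentNoise:
    """Smooth exploration: eps(s) = features(s) @ W, W resampled rarely.

    Because W is held fixed between resamples, the same state always
    gets the same perturbation, avoiding the jerky motion produced by
    independent per-step Gaussian noise.
    """

    def __init__(self, feat_dim, act_dim, sigma=0.5, resample_every=64):
        self.sigma, self.every = sigma, resample_every
        self.shape = (feat_dim, act_dim)
        self.t = 0
        self._resample()

    def _resample(self):
        self.W = np.random.randn(*self.shape) * self.sigma

    def __call__(self, features):
        if self.t % self.every == 0:
            self._resample()
        self.t += 1
        return features @ self.W  # noise is a smooth function of state

# Usage sketch: action = policy_mean(s) + noise(feature_fn(s))
noise = StateDependentNoise(feat_dim=8, act_dim=2)
eps = noise(np.ones(8))
```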
arXiv Detail & Related papers (2020-05-12T12:28:25Z)