Dexterous Robotic Manipulation using Deep Reinforcement Learning and
Knowledge Transfer for Complex Sparse Reward-based Tasks
- URL: http://arxiv.org/abs/2205.09683v1
- Date: Thu, 19 May 2022 16:40:22 GMT
- Title: Dexterous Robotic Manipulation using Deep Reinforcement Learning and
Knowledge Transfer for Complex Sparse Reward-based Tasks
- Authors: Qiang Wang, Francisco Roldan Sanchez, Robert McCarthy, David Cordova
Bulens, Kevin McGuinness, Noel O'Connor, Manuel Wüthrich, Felix Widmaier,
Stefan Bauer, Stephen J. Redmond
- Abstract summary: This paper describes a deep reinforcement learning (DRL) approach that won Phase 1 of the Real Robot Challenge (RRC) 2021.
We extend this method by modifying the task of Phase 1 of the RRC to require the robot to maintain the cube in a particular orientation.
- Score: 23.855931395239747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a deep reinforcement learning (DRL) approach that won
Phase 1 of the Real Robot Challenge (RRC) 2021, and then extends this method to
a more difficult manipulation task. The RRC consisted of using a TriFinger
robot to manipulate a cube along a specified positional trajectory, but with no
requirement for the cube to have any specific orientation. We used a relatively
simple reward function, a combination of goal-based sparse reward and distance
reward, in conjunction with Hindsight Experience Replay (HER) to guide the
learning of the DRL agent (Deep Deterministic Policy Gradient (DDPG)). Our
approach allowed our agents to acquire dexterous robotic manipulation
strategies in simulation. These strategies were then applied to the real robot
and outperformed all other competition submissions, including those using more
traditional robotic control techniques, in the final evaluation stage of the
RRC. Here we extend this method by modifying the task of Phase 1 of the RRC to
require the robot to maintain the cube in a particular orientation, while the
cube is moved along the required positional trajectory. The requirement to also
orient the cube makes the agent unable to learn the task through blind
exploration due to increased problem complexity. To circumvent this issue, we
make novel use of a Knowledge Transfer (KT) technique that allows the
strategies learned by the agent in the original task (which was agnostic to
cube orientation) to be transferred to this task (where orientation matters).
KT allowed the agent to learn and perform the extended task in the simulator,
which reduced the average positional deviation from 0.134 m to 0.02 m, and the
average orientation deviation from 142° to 76° during evaluation.
This KT concept shows good generalisation properties and could be applied to
any actor-critic learning algorithm.
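As a concrete illustration of the reward design described above, the sketch below combines the goal-based sparse term and the dense distance term, and shows the "future" goal-relabelling step of Hindsight Experience Replay that feeds the DDPG replay buffer. The success threshold, distance weighting, and transition layout are illustrative assumptions, not values taken from the paper.

    import numpy as np

    # Assumed values for illustration; the abstract does not specify them.
    SUCCESS_THRESHOLD = 0.02  # metres within which the goal counts as reached
    DISTANCE_WEIGHT = 1.0     # relative weight of the dense distance term

    def compute_reward(achieved_pos: np.ndarray, goal_pos: np.ndarray) -> float:
        """Goal-based sparse reward combined with a dense distance term."""
        distance = np.linalg.norm(achieved_pos - goal_pos)
        sparse = 0.0 if distance < SUCCESS_THRESHOLD else -1.0  # sparse goal bonus
        dense = -DISTANCE_WEIGHT * distance                     # distance shaping
        return sparse + dense

    def her_relabel(episode, k=4, rng=None):
        """HER 'future' strategy: relabel transitions with goals that were
        actually achieved later in the episode, so failed rollouts still
        produce reward signal for the off-policy DDPG learner."""
        rng = rng or np.random.default_rng()
        relabelled = []
        for t, (obs, action, _r, next_obs, achieved, _goal) in enumerate(episode):
            future_ts = rng.integers(t, len(episode), size=min(k, len(episode) - t))
            for ft in future_ts:
                new_goal = episode[ft][4]  # goal achieved at a future timestep
                reward = compute_reward(achieved, new_goal)
                relabelled.append((obs, action, reward, next_obs, achieved, new_goal))
        return relabelled

In practice the relabelled transitions are stored alongside the originals in the replay buffer, which is what lets the otherwise sparse reward reach the critic often enough for learning to take off.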
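The abstract does not spell out the Knowledge Transfer mechanism, only that it moves strategies from the orientation-agnostic task to the orientation-aware one and applies to any actor-critic algorithm. One plausible reading, sketched below for the actor only, is a warm start: initialise the extended-task network from the original-task weights, zero-padding the input columns for the newly added orientation goal. All layer sizes, the quaternion goal encoding, and the [observation, position goal, orientation goal] input ordering are assumptions for illustration.

    import torch
    import torch.nn as nn

    POS_GOAL_DIM = 3       # position-only goal (x, y, z)
    FULL_GOAL_DIM = 3 + 4  # position plus an assumed quaternion orientation
    OBS_DIM, ACT_DIM, HIDDEN = 64, 9, 256  # assumed sizes; 9 joints on TriFinger

    def make_actor(goal_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(OBS_DIM + goal_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACT_DIM), nn.Tanh(),
        )

    def transfer_actor(src: nn.Sequential, dst: nn.Sequential) -> None:
        """Copy source weights into the target actor, zero-padding the input
        columns that correspond to the new orientation-goal dimensions.
        Assumes inputs are concatenated as [obs, position goal, orientation
        goal], so the source columns line up with the first target columns."""
        with torch.no_grad():
            for s, d in zip(src, dst):
                if isinstance(s, nn.Linear):
                    d.bias.copy_(s.bias)
                    if d.weight.shape == s.weight.shape:
                        d.weight.copy_(s.weight)
                    else:  # first layer: widened input
                        d.weight.zero_()
                        d.weight[:, : s.weight.shape[1]].copy_(s.weight)

    source_actor = make_actor(POS_GOAL_DIM)   # trained on the orientation-agnostic task
    target_actor = make_actor(FULL_GOAL_DIM)  # fine-tuned on the extended task
    transfer_actor(source_actor, target_actor)

Because DDPG is actor-critic, the same column-padding transfer would apply to the critic; fine-tuning then only has to learn how the orientation-goal inputs modulate an already competent positional policy.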
Related papers
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, remain extremely robust under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning
with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm, which has the capability to learn control policies at a notably faster rate compared to similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from
Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single
Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve, with unprecedented sample efficiency, challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- CLUTR: Curriculum Learning via Unsupervised Task Representation Learning [130.79246770546413]
CLUTR is a novel curriculum learning algorithm that decouples task representation and curriculum learning into a two-stage optimization.
We show CLUTR outperforms PAIRED, a principled and popular unsupervised environment design (UED) method, in terms of generalization and sample efficiency in the challenging CarRacing and navigation environments.
arXiv Detail & Related papers (2022-10-19T01:45:29Z)
- Robot Learning of Mobile Manipulation with Reachability Behavior Priors [38.49783454634775]
Mobile Manipulation (MM) systems are ideal candidates for taking up the role of a personal assistant in unstructured real-world environments.
Among other challenges, MM requires effective coordination of the robot's embodiments for executing tasks that require both mobility and manipulation.
We study the integration of robotic reachability priors in actor-critic RL methods for accelerating the learning of MM for reaching and fetching tasks.
arXiv Detail & Related papers (2022-03-08T12:44:42Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action
Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- Real Robot Challenge using Deep Reinforcement Learning [6.332038240397164]
This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge.
The challenge requires a three-fingered robot to carry a cube along specified goal trajectories.
We use a pure reinforcement learning approach which requires minimal expert knowledge of the robotic system.
arXiv Detail & Related papers (2021-09-30T16:12:17Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and
Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
- Deep Adversarial Reinforcement Learning for Object Disentangling [36.66974848126079]
We present a novel adversarial reinforcement learning (ARL) framework for disentangling waste objects.
The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.
We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task.
arXiv Detail & Related papers (2020-03-08T13:20:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.