Real Robot Challenge using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2109.15233v1
- Date: Thu, 30 Sep 2021 16:12:17 GMT
- Title: Real Robot Challenge using Deep Reinforcement Learning
- Authors: Robert McCarthy, Francisco Roldan Sanchez, Kevin McGuinness, Noel
O'Connor, Stephen J. Redmond
- Abstract summary: This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge.
In this challenge, a three-fingered robot must carry a cube along specified goal trajectories.
We use a pure reinforcement learning approach which requires minimal expert knowledge of the robotic system.
- Score: 6.332038240397164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper details our winning submission to Phase 1 of the 2021 Real Robot
Challenge, in which a three-fingered robot must carry a cube along
specified goal trajectories. To solve Phase 1, we use a pure reinforcement
learning approach which requires minimal expert knowledge of the robotic system
or of robotic grasping in general. A sparse goal-based reward is employed in
conjunction with Hindsight Experience Replay to teach the control policy to
move the cube to the desired x and y coordinates. Simultaneously, a dense
distance-based reward is employed to teach the policy to lift the cube to the
desired z coordinate. The policy is trained in simulation with domain
randomization before being transferred to the real robot for evaluation.
Although performance tends to worsen after this transfer, our best trained
policy can successfully lift the real cube along goal trajectories via the use
of an effective pinching grasp. Our approach outperforms all other submissions,
including those leveraging more traditional robotic control techniques, and is
the first learning-based approach to solve this challenge.
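The two-part reward described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tolerance and the weight of the lifting term are assumed values, and the function names are hypothetical.

```python
import numpy as np

# Sketch of a sparse, HER-compatible reward for the cube's (x, y) position
# combined with a dense, distance-based reward for its height z.
XY_TOLERANCE = 0.02  # metres within which the (x, y) goal counts as reached (assumed)
Z_SCALE = 1.0        # weight of the dense lifting term (assumed)

def xy_sparse_reward(cube_pos, goal_pos):
    """0 if the cube is within tolerance of the (x, y) goal, else -1."""
    xy_dist = np.linalg.norm(cube_pos[:2] - goal_pos[:2])
    return 0.0 if xy_dist <= XY_TOLERANCE else -1.0

def z_dense_reward(cube_pos, goal_pos):
    """Negative absolute distance between cube height and goal height."""
    return -Z_SCALE * abs(cube_pos[2] - goal_pos[2])

def total_reward(cube_pos, goal_pos):
    return xy_sparse_reward(cube_pos, goal_pos) + z_dense_reward(cube_pos, goal_pos)
```

The sparse (x, y) term is the part Hindsight Experience Replay can relabel on failed episodes, while the dense z term gives continuous feedback for lifting.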
Related papers
- Single-Shot Learning of Stable Dynamical Systems for Long-Horizon Manipulation Tasks [48.54757719504994]
This paper focuses on improving task success rates while reducing the amount of training data needed.
Our approach introduces a novel method that segments long-horizon demonstrations into discrete steps defined by waypoints and subgoals.
We validate our approach through both simulation and real-world experiments, demonstrating effective transfer from simulation to physical robotic platforms.
arXiv Detail & Related papers (2024-10-01T19:49:56Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning technique for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Towards Real-World Efficiency: Domain Randomization in Reinforcement Learning for Pre-Capture of Free-Floating Moving Targets by Autonomous Robots [0.0]
We introduce a deep reinforcement learning-based control approach to address the intricate challenge of the robotic pre-grasping phase under microgravity conditions.
Our methodology incorporates an off-policy reinforcement learning framework, employing the soft actor-critic technique to enable the gripper to proficiently approach a free-floating moving object.
For effective learning of the pre-grasping approach task, we developed a reward function that offers the agent clear and insightful feedback.
arXiv Detail & Related papers (2024-06-10T16:54:51Z)
- Contact Energy Based Hindsight Experience Prioritization [19.42106651692228]
Multi-goal robot manipulation tasks with sparse rewards are difficult for reinforcement learning (RL) algorithms.
Recent algorithms such as Hindsight Experience Replay (HER) expedite learning by taking advantage of failed trajectories.
We propose a novel approach, Contact Energy Based Prioritization (CEBP), which selects samples from the replay buffer based on the rich information provided by contact.
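Contact-based prioritization of this kind can be sketched as weighted replay sampling. This is an illustrative sketch only: the per-transition "contact energy" scalar and the softmax temperature are assumptions, not the paper's actual formulation.

```python
import numpy as np

# Sample replay-buffer indices with probability increasing in a hypothetical
# per-transition contact-energy score, via a temperature-scaled softmax.
def sample_indices(contact_energies, batch_size, temperature=1.0, rng=None):
    rng = np.random.default_rng(rng)
    logits = np.asarray(contact_energies, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), size=batch_size, p=probs)
```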
arXiv Detail & Related papers (2023-12-05T11:32:25Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Advanced Skills by Learning Locomotion and Local Navigation End-to-End [10.872193480485596]
In this work, we propose to solve the complete problem by training an end-to-end policy with deep reinforcement learning.
We demonstrate the successful deployment of policies on a real quadrupedal robot.
arXiv Detail & Related papers (2022-09-26T16:35:00Z)
- Dexterous Robotic Manipulation using Deep Reinforcement Learning and Knowledge Transfer for Complex Sparse Reward-based Tasks [23.855931395239747]
This paper describes a deep reinforcement learning (DRL) approach that won Phase 1 of the Real Robot Challenge (RRC) 2021.
We extend this method by modifying the task of Phase 1 of the RRC to require the robot to maintain the cube in a particular orientation.
arXiv Detail & Related papers (2022-05-19T16:40:22Z)
- Reinforcement Learning Experiments and Benchmark for Solving Robotic Reaching Tasks [0.0]
Reinforcement learning has been successfully applied to solving the reaching task with robotic arms.
It is shown that augmenting the reward signal with the Hindsight Experience Replay exploration technique increases the average return of off-policy agents.
arXiv Detail & Related papers (2020-11-11T14:00:49Z)
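Hindsight Experience Replay recurs in several of the papers above. The core relabeling step can be sketched as follows; the transition tuple format and the relabeling strategy (using the final achieved state as the substitute goal) are simplifying assumptions for illustration.

```python
import numpy as np

# Minimal HER relabeling sketch: failed episodes are stored again with an
# achieved state substituted for the original goal, making a sparse reward
# informative even when the desired goal was never reached.
def her_relabel(episode, reward_fn):
    """episode: list of (obs, action, achieved_goal, desired_goal) tuples.
    Returns extra transitions whose goal is the episode's final achieved state."""
    final_achieved = episode[-1][2]
    relabeled = []
    for obs, action, achieved, _ in episode:
        r = reward_fn(achieved, final_achieved)
        relabeled.append((obs, action, final_achieved, r))
    return relabeled
```

Under this relabeling, at least the final transition of every episode receives a success reward, which is what accelerates learning from otherwise unrewarded trajectories.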
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.