A Reinforcement Learning Approach for Robotic Unloading from Visual
Observations
- URL: http://arxiv.org/abs/2309.06621v1
- Date: Tue, 12 Sep 2023 22:22:28 GMT
- Title: A Reinforcement Learning Approach for Robotic Unloading from Visual
Observations
- Authors: Vittorio Giammarino, Alberto Giammarino, Matthew Pearce
- Abstract summary: In this work, we focus on a robotic unloading problem from visual observations.
We propose a hierarchical controller structure that combines a high-level decision-making module with classical motion control.
Our experiments demonstrate that both these elements play a crucial role in achieving improved learning performance.
- Score: 1.420663986837751
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we focus on a robotic unloading problem from visual
observations, where robots are required to autonomously unload stacks of
parcels using RGB-D images as their primary input source. While supervised and
imitation learning have accomplished good results in these types of tasks, they
heavily rely on labeled data, which are challenging to obtain in realistic
scenarios. Our study aims to develop a sample-efficient controller framework
that can learn unloading tasks without the need for labeled data during the
learning process. To tackle this challenge, we propose a hierarchical
controller structure that combines a high-level decision-making module with
classical motion control. The high-level module is trained using Deep
Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism
and design a reward function tailored to this task. Our experiments demonstrate
that both these elements play a crucial role in achieving improved learning
performance. Furthermore, to ensure reproducibility and establish a benchmark
for future research, we provide free access to our code and simulation.
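The hierarchical structure described above can be illustrated with a minimal sketch: a high-level policy scores discretized pick locations from visual input, a safety bias masks risky choices before action selection, and a task-tailored reward penalizes unsafe picks. All function names, the safety rule, and the reward shape here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N_CELLS = 16  # discretized pick locations on the parcel stack (assumed)

def safety_mask(heights):
    """Illustrative safety bias: only parcels at the top of the stack
    (not supporting anything above them) are safe to remove."""
    return heights >= heights.max()

def select_action(q_values, heights):
    """High-level decision: argmax over Q-values restricted to safe cells."""
    mask = safety_mask(heights)
    masked = np.where(mask, q_values, -np.inf)
    return int(np.argmax(masked))

def reward(picked_safe, parcel_removed, stack_collapsed):
    """Assumed task-tailored reward: bonus for removing a parcel, large
    penalty for collapsing the stack, small per-step time penalty."""
    r = -0.01                # time penalty encourages fast unloading
    if parcel_removed:
        r += 1.0
    if stack_collapsed:
        r -= 5.0
    if not picked_safe:
        r -= 0.5             # safety also reflected in the reward signal
    return r

# Example: a 4x4 grid of parcel heights; the masked policy can only
# pick one of the topmost (height 3) cells.
heights = np.array([1, 2, 2, 1, 1, 3, 3, 1,
                    1, 3, 3, 1, 1, 2, 2, 1], dtype=float)
q = np.random.default_rng(0).normal(size=N_CELLS)
a = select_action(q, heights)
assert heights[a] == heights.max()
```

In the paper's framework the selected high-level action would then be handed to a classical motion controller for execution; that layer is omitted here.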
Related papers
- Affordance-Guided Reinforcement Learning via Visual Prompting [51.361977466993345]
We study using vision-language models (VLMs) to define dense rewards for robotic learning.
On a real-world manipulation task specified by natural language description, we find that these rewards improve the sample efficiency of autonomous RL.
arXiv Detail & Related papers (2024-07-14T21:41:29Z) - Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, with results of similar quality to relevant alternatives.
arXiv Detail & Related papers (2024-04-02T11:44:37Z) - SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates and extreme robustness under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insight is to utilize offline reinforcement learning techniques to enable efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Reinforcement Learning in Robotic Motion Planning by Combined
Experience-based Planning and Self-Imitation Learning [7.919213739992465]
High-quality and representative data is essential for both Imitation Learning (IL)- and Reinforcement Learning (RL)-based motion planning tasks.
We propose self-imitation learning by planning plus (SILP+) algorithm, which embeds experience-based planning into the learning architecture.
Various experimental results show that SILP+ achieves better training efficiency and a higher, more stable success rate in complex motion planning tasks.
arXiv Detail & Related papers (2023-06-11T19:47:46Z) - Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from
Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z) - Hindsight States: Blending Sim and Real Task Elements for Efficient
Reinforcement Learning [61.3506230781327]
In robotics, one approach to generate training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z) - DRL: Deep Reinforcement Learning for Intelligent Robot Control --
Concept, Literature, and Future [0.0]
The combination of machine learning, computer vision, and robotic systems motivates this work to propose a vision-based learning framework for intelligent robot control as its ultimate goal (a vision-based learning robot).
This work specifically introduces deep reinforcement learning as the learning framework, a general-purpose framework for AI (AGI), meaning application-independent and platform-independent.
arXiv Detail & Related papers (2021-04-20T15:26:10Z) - Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.