Physics-Guided Hierarchical Reward Mechanism for Learning-Based Robotic
Grasping
- URL: http://arxiv.org/abs/2205.13561v3
- Date: Sun, 23 Jul 2023 23:52:45 GMT
- Title: Physics-Guided Hierarchical Reward Mechanism for Learning-Based Robotic
Grasping
- Authors: Yunsik Jung, Lingfeng Tao, Michael Bowman, Jiucai Zhang, Xiaoli Zhang
- Abstract summary: We develop a Physics-Guided Deep Reinforcement Learning method with a Hierarchical Reward Mechanism to improve learning efficiency and generalizability for learning-based autonomous grasping.
Our method is validated in robotic grasping tasks with a 3-finger MICO robot arm.
- Score: 10.424363966870775
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning-based grasping can afford real-time grasp motion planning of
multi-fingered robotic hands thanks to its high computational efficiency.
However, learning-based methods must explore a large search space during the
learning process. This large search space causes low learning efficiency, which
has been the main barrier to practical adoption. In addition, the trained
policy does not generalize unless objects are identical to the training
objects. In this work, we develop a novel Physics-Guided Deep
Reinforcement Learning with a Hierarchical Reward Mechanism to improve learning
efficiency and generalizability for learning-based autonomous grasping. Unlike
conventional observation-based grasp learning, physics-informed metrics are
utilized to convey correlations between features associated with hand
structures and objects to improve learning efficiency and outcomes. Further,
the hierarchical reward mechanism enables the robot to learn prioritized
components of the grasping tasks. Our method is validated in robotic grasping
tasks with a 3-finger MICO robot arm. The results show that our method
outperforms standard Deep Reinforcement Learning methods in various
robotic grasping tasks.
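The paper's reward implementation is not reproduced here; the following is a minimal, hypothetical Python sketch of how a hierarchical, physics-guided reward for a 3-finger grasp could be structured. All names, thresholds, and weights are illustrative assumptions, and grasp_quality_metric stands in for a physics-informed score (e.g., a force-closure or wrench-space proxy) rather than the paper's actual metric.

```python
import numpy as np

def hierarchical_grasp_reward(palm_to_object_dist,
                              fingertip_contacts,
                              grasp_quality_metric,
                              object_lift_height,
                              dist_scale=0.5,
                              lift_target=0.10):
    """Toy hierarchical reward for one step of a 3-finger grasping episode.

    Hypothetical inputs (not taken from the paper):
      palm_to_object_dist  -- palm-to-object-centroid distance [m]
      fingertip_contacts   -- number of fingertips in contact (0..3)
      grasp_quality_metric -- physics-informed score in [0, 1]
      object_lift_height   -- object height above its rest pose [m]
    """
    # Level 1: reaching -- always active, encourages approaching the object.
    r_reach = np.exp(-palm_to_object_dist / dist_scale)

    # Level 2: contact quality -- only pays out once the hand is close,
    # so the agent learns the higher-priority sub-task first.
    close_enough = palm_to_object_dist < 0.05
    r_contact = (fingertip_contacts / 3.0) * grasp_quality_metric if close_enough else 0.0

    # Level 3: lifting -- only pays out once a multi-contact grasp exists.
    r_lift = min(object_lift_height / lift_target, 1.0) if fingertip_contacts >= 2 else 0.0

    # Higher-priority components are gated and weighted more heavily.
    return 1.0 * r_reach + 2.0 * r_contact + 4.0 * r_lift
```

The gating is what makes the sketch hierarchical: each reward level only becomes available once the preceding sub-task (reach, then contact, then lift) is satisfied, which is one simple way to encode prioritized grasping sub-tasks.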
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that spire outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Advancing Household Robotics: Deep Interactive Reinforcement Learning for Efficient Training and Enhanced Performance [0.0]
Reinforcement learning, or RL, has emerged as a key robotics technology that enables robots to interact with their environment.
We present a novel method to preserve and reuse information and advice via Deep Interactive Reinforcement Learning.
arXiv Detail & Related papers (2024-05-29T01:46:50Z) - Tactile Active Inference Reinforcement Learning for Efficient Robotic
Manipulation Skill Acquisition [10.072992621244042]
We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL)
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves significantly higher training efficiency in non-prehensile object-pushing tasks.
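The summary above only names active inference and intrinsic curiosity; as a generic illustration of what an intrinsic-curiosity bonus looks like when added to an extrinsic RL reward (not Tactile-AIRL's actual formulation), here is a minimal linear forward-model sketch. The class, its parameters, and the scaling are assumptions for illustration only.

```python
import numpy as np

class CuriosityBonus:
    """Generic curiosity bonus: reward proportional to the prediction error
    of a learned forward model over (state, action) -> next_state."""

    def __init__(self, state_dim, action_dim, lr=1e-2, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(state_dim, state_dim + action_dim))
        self.lr = lr
        self.scale = scale

    def __call__(self, state, action, next_state):
        x = np.concatenate([state, action])
        pred = self.W @ x                        # linear forward model
        err = next_state - pred
        self.W += self.lr * np.outer(err, x)     # one SGD step on the squared error
        return self.scale * float(err @ err)     # intrinsic reward

# Typical use: r_total = r_extrinsic + curiosity(state, action, next_state),
# so poorly predicted (novel) transitions are explored more often.
```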
arXiv Detail & Related papers (2023-11-19T10:19:22Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insight is to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot
Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not yield a fair robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Active Hierarchical Imitation and Reinforcement Learning [0.0]
In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we developed.
Our experimental results show that using DAgger with a reward-based active learning method achieves better performance while reducing the physical and mental effort required of humans during training.
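Since the entry above reports results with DAgger, a minimal, framework-agnostic sketch of that loop is included here for reference. The env is assumed to follow the classic Gym reset/step interface, and expert_policy, learner, and train are caller-supplied placeholders, not APIs from the paper.

```python
def dagger(expert_policy, learner, env, train, n_iters=10, episode_len=200):
    """Minimal DAgger loop: roll out the learner's policy, but label every
    visited state with the expert's action, then retrain on the aggregate.

    expert_policy(obs) -> expert action label
    learner(obs)       -> learner action (fit in place by train)
    train(dataset)     -> fits the learner on (obs, expert_action) pairs
    """
    dataset = []
    for _ in range(n_iters):
        obs = env.reset()
        for _ in range(episode_len):
            dataset.append((obs, expert_policy(obs)))   # expert labels the state
            obs, _, done, _ = env.step(learner(obs))    # but the learner acts
            if done:
                break
        train(dataset)
    return learner
```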
arXiv Detail & Related papers (2020-12-14T08:27:27Z) - Efficient reinforcement learning control for continuum robots based on
Inexplicit Prior Knowledge [3.3645162441357437]
We propose an efficient reinforcement learning method based on inexplicit prior knowledge.
By using our method, we can achieve active visual tracking and distance maintenance of a tendon-driven robot.
arXiv Detail & Related papers (2020-02-26T15:47:11Z) - Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.