Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance
- URL: http://arxiv.org/abs/2212.09902v1
- Date: Mon, 19 Dec 2022 22:50:40 GMT
- Title: Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance
- Authors: Kelvin Xu, Zheyuan Hu, Ria Doshi, Aaron Rovinsky, Vikash Kumar,
Abhishek Gupta, Sergey Levine
- Abstract summary: We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
- Score: 71.36749876465618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex and contact-rich robotic manipulation tasks, particularly those that
involve multi-fingered hands and underactuated object manipulation, present a
significant challenge to any control method. Methods based on reinforcement
learning offer an appealing choice for such settings, as they can enable robots
to learn to delicately balance contact forces and dexterously reposition
objects without strong modeling assumptions. However, running reinforcement
learning on real-world dexterous manipulation systems often requires
significant manual engineering. This negates the benefits of autonomous data
collection and ease of use that reinforcement learning should in principle
provide. In this paper, we describe a system for vision-based dexterous
manipulation that provides a "programming-free" approach for users to define
new tasks and enable robots with complex multi-fingered hands to learn to
perform them through interaction. The core principle underlying our system is
that, in a vision-based setting, users should be able to provide high-level
intermediate supervision that circumvents challenges in teleoperation or
kinesthetic teaching, allowing a robot not only to learn a task efficiently
but also to practice it autonomously. Our system includes a framework for users to
define a final task and intermediate sub-tasks with image examples, a
reinforcement learning procedure that learns the task autonomously without
interventions, and experimental results with a four-finger robotic hand
learning multi-stage object manipulation tasks directly in the real world,
without simulation, manual modeling, or reward engineering.
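The idea of deriving rewards from user-provided sub-task images can be sketched as follows. This is a minimal, hypothetical stand-in, not the paper's actual architecture: it trains a per-substep logistic-regression classifier (the paper's network and training details are not given here) on user goal images versus other observations, and uses the classifier's success probability as a dense reward during autonomous practice.

```python
# Hedged sketch: per-substep goal-image classifier whose output
# probability serves as a reward signal. All names and the
# logistic-regression model are illustrative assumptions.
import numpy as np


class SubstepRewardClassifier:
    """Toy stand-in for a per-substep goal-image classifier.

    Positive examples: user-provided images of the completed substep.
    Negative examples: other observations collected during practice.
    Images are assumed to be flattened into fixed-size feature vectors.
    """

    def __init__(self, feature_dim, lr=0.1, epochs=200, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=feature_dim)
        self.b = 0.0
        self.lr = lr
        self.epochs = epochs

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, goal_images, other_images):
        # Stack positives (label 1) and negatives (label 0), then run
        # plain batch gradient descent on the logistic loss.
        X = np.vstack([goal_images, other_images])
        y = np.concatenate([np.ones(len(goal_images)),
                            np.zeros(len(other_images))])
        for _ in range(self.epochs):
            p = self._sigmoid(X @ self.w + self.b)
            self.w -= self.lr * (X.T @ (p - y)) / len(y)
            self.b -= self.lr * np.mean(p - y)

    def reward(self, image):
        # Probability that the substep is achieved, used as dense reward.
        return float(self._sigmoid(image @ self.w + self.b))
```

In a full system, one such classifier per substep would let the policy chain sub-tasks and reset itself for autonomous practice without hand-engineered rewards.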
Related papers
- Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z)
- Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition [10.072992621244042]
We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL).
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves significantly higher training efficiency in non-prehensile object pushing tasks.
arXiv Detail & Related papers (2023-11-19T10:19:22Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Physics-Guided Hierarchical Reward Mechanism for Learning-Based Robotic Grasping [10.424363966870775]
We develop a Physics-Guided Deep Reinforcement Learning with a Hierarchical Reward Mechanism to improve learning efficiency and generalizability for learning-based autonomous grasping.
Our method is validated in robotic grasping tasks with a 3-finger MICO robot arm.
arXiv Detail & Related papers (2022-05-26T18:01:56Z)
- BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning [108.41464483878683]
We study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks.
We develop an interactive and flexible imitation learning system that can learn from both demonstrations and interventions.
When scaling data collection on a real robot to more than 100 distinct tasks, we find that this system can perform 24 unseen manipulation tasks with an average success rate of 44%.
arXiv Detail & Related papers (2022-02-04T07:30:48Z)
- An Empowerment-based Solution to Robotic Manipulation Tasks with Sparse Rewards [14.937474939057596]
It is important for robotic manipulators to learn to accomplish tasks even if they are only provided with very sparse instruction signals.
This paper proposes an intrinsic motivation approach that can be easily integrated into any standard reinforcement learning algorithm.
arXiv Detail & Related papers (2020-10-15T19:06:21Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.