In-Hand Object Rotation via Rapid Motor Adaptation
- URL: http://arxiv.org/abs/2210.04887v1
- Date: Mon, 10 Oct 2022 17:58:45 GMT
- Title: In-Hand Object Rotation via Rapid Motor Adaptation
- Authors: Haozhi Qi, Ashish Kumar, Roberto Calandra, Yi Ma, Jitendra Malik
- Abstract summary: We show how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips.
The controller is trained entirely in simulation on only cylindrical objects.
It can be directly deployed to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights over the z-axis.
- Score: 59.59946962428837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized in-hand manipulation has long been an unsolved challenge of
robotics. As a small step towards this grand goal, we demonstrate how to design
and learn a simple adaptive controller to achieve in-hand object rotation using
only fingertips. The controller is trained entirely in simulation on only
cylindrical objects, which then - without any fine-tuning - can be directly
deployed to a real robot hand to rotate dozens of objects with diverse sizes,
shapes, and weights over the z-axis. This is achieved via rapid online
adaptation of the controller to the object properties using only proprioception
history. Furthermore, natural and stable finger gaits automatically emerge from
training the control policy via reinforcement learning. Code and more videos
are available at https://haozhi.io/hora
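The adaptation recipe described in the abstract can be illustrated with a minimal sketch: a base policy consumes proprioception plus a low-dimensional extrinsics vector; in simulation that vector comes from an encoder of privileged object properties, and a separate adaptation module is then trained to regress it from proprioception history, so no privileged information is needed on the real hand. The module names, dimensions, and training step below are illustrative assumptions, not the authors' implementation (the released code is at https://haozhi.io/hora).

```python
# Minimal sketch of a Rapid Motor Adaptation-style setup in PyTorch.
# All dimensions and module names are illustrative placeholders.
import torch
import torch.nn as nn

OBS_DIM, PRIV_DIM, LATENT_DIM, HIST_LEN, ACT_DIM = 32, 9, 8, 30, 16

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ELU()]
    return nn.Sequential(*layers[:-1])  # no activation after the last layer

# Phase 1: an encoder of privileged object properties (mass, size, friction, ...)
# is trained jointly with the base policy via RL in simulation.
priv_encoder = mlp([PRIV_DIM, 64, LATENT_DIM])
base_policy  = mlp([OBS_DIM + LATENT_DIM, 256, 128, ACT_DIM])

# Phase 2: an adaptation module regresses the same latent from a window of
# proprioception history, so privileged inputs are not needed at deployment.
adaptation_module = mlp([HIST_LEN * OBS_DIM, 256, 128, LATENT_DIM])
optimizer = torch.optim.Adam(adaptation_module.parameters(), lr=1e-3)

def phase2_step(obs_history, priv_params):
    """One supervised step on simulated rollouts: match the adaptation
    module's latent to the privileged encoder's latent."""
    with torch.no_grad():
        z_target = priv_encoder(priv_params)
    z_pred = adaptation_module(obs_history.flatten(1))
    loss = nn.functional.mse_loss(z_pred, z_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At deployment, actions depend only on proprioception and its history.
def act(obs, obs_history):
    z_hat = adaptation_module(obs_history.flatten(1))
    return base_policy(torch.cat([obs, z_hat], dim=-1))
```

Because the deployment path uses only proprioception and its history, the same policy can, per the abstract, be transferred to the real hand without fine-tuning.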
Related papers
- Learning Force Control for Legged Manipulation [18.894304288225385]
We propose a method for training RL policies for direct force control without requiring access to force sensing.
We showcase our method on a whole-body control platform of a quadruped robot with an arm.
We provide the first deployment of learned whole-body force control in legged manipulators, paving the way for more versatile and adaptable legged robots.
arXiv Detail & Related papers (2024-05-02T15:53:43Z)
- DexDribbler: Learning Dexterous Soccer Manipulation via Dynamic Supervision [26.9579556496875]
Joint manipulation of moving objects and locomotion with legs, as in playing soccer, has received scant attention in the learning community.
We propose a feedback control block that computes the necessary body-level movement accurately and uses its outputs as dynamic joint-level locomotion supervision.
We observe that our learning scheme can not only make the policy network converge faster but also enable soccer robots to perform sophisticated maneuvers.
arXiv Detail & Related papers (2024-03-21T11:16:28Z)
- Rotating without Seeing: Towards In-hand Dexterity through Touch [43.87509744768282]
We present Touch Dexterity, a new system that can perform in-hand object rotation using only touch, without seeing the object.
Instead of relying on precise tactile sensing in a small region, we introduce a new system design using dense binary force sensors (touch or no touch) overlaying one side of the whole robot hand.
We train an in-hand rotation policy with reinforcement learning on diverse objects in simulation. Relying on touch-only sensing, we can directly deploy the policy on a real robot hand and rotate novel objects that were not seen during training.
arXiv Detail & Related papers (2023-03-20T05:38:30Z)
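A minimal sketch of the touch-only observation described above, assuming binary contact pads and joint positions as the sole policy inputs; the pad count, joint count, and network sizes are placeholders, not the paper's hardware or architecture.

```python
# Hypothetical touch-only policy input: binary contact pads plus joint
# positions, with no vision and no object pose.
import torch
import torch.nn as nn

NUM_TOUCH_PADS = 16   # placeholder: dense binary (touch / no-touch) sensors
NUM_JOINTS     = 16   # placeholder: joint positions of the robot hand

policy = nn.Sequential(
    nn.Linear(NUM_TOUCH_PADS + NUM_JOINTS, 256), nn.ELU(),
    nn.Linear(256, 128), nn.ELU(),
    nn.Linear(128, NUM_JOINTS),  # target joint positions
)

def act(touch_binary, joint_pos):
    """touch_binary: (B, NUM_TOUCH_PADS) in {0, 1}; joint_pos: (B, NUM_JOINTS)."""
    obs = torch.cat([touch_binary.float(), joint_pos], dim=-1)
    return policy(obs)
```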
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that successfully applies to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
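One way to read the greedy curriculum idea above is as an iterative selection loop: repeatedly pick the remaining shape the current policy handles best, then fine-tune on the enlarged set. The sketch below is a hypothetical rendering of that loop, not the paper's algorithm; `evaluate` and `finetune` are assumed caller-supplied functions.

```python
# Hypothetical greedy shape-curriculum loop: at each step, add the easiest
# remaining shape for the current policy and fine-tune on the growing set.
def greedy_shape_curriculum(shapes, policy, evaluate, finetune):
    """evaluate(policy, shape) -> success rate in [0, 1] (assumed helper);
    finetune(policy, curriculum) -> policy trained on that set (assumed helper)."""
    curriculum, remaining = [], list(shapes)
    while remaining:
        # Greedy step: choose the shape the current policy already handles best.
        scores = {shape: evaluate(policy, shape) for shape in remaining}
        best = max(scores, key=scores.get)
        curriculum.append(best)
        remaining.remove(best)
        policy = finetune(policy, curriculum)
    return curriculum, policy
```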
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-fingered robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
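The idea of defining sub-tasks with image examples can be sketched as one small success classifier per sub-task, whose output scores whether the current camera image looks like a completed stage and can serve as a reward for real-world RL. The class name, network, and stage names below are assumptions for illustration, not the paper's interface.

```python
# Hypothetical sketch: each sub-task is specified by user-provided example
# images, and a small classifier scores whether the current camera image
# looks like a successful completion of that sub-task.
import torch
import torch.nn as nn

class SubtaskSuccessClassifier(nn.Module):
    def __init__(self, image_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(image_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, image):
        # Probability that `image` shows the sub-task completed.
        return torch.sigmoid(self.net(image))

# One classifier per user-defined sub-task; its output can act as the reward
# signal for that stage of the manipulation (stage names are placeholders).
subtasks = ["reach", "grasp", "reposition"]
classifiers = {name: SubtaskSuccessClassifier() for name in subtasks}
```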
- Learning a Single Near-hover Position Controller for Vastly Different Quadcopters [56.37274861303324]
This paper proposes an adaptive near-hover position controller for quadcopters.
It can be deployed on quadcopters with very different masses, sizes, and motor constants.
It also shows rapid adaptation to unknown disturbances during runtime.
arXiv Detail & Related papers (2022-09-19T17:55:05Z)
- Learning fast and agile quadrupedal locomotion over complex terrain [0.3806109052869554]
We propose a robust controller that achieves natural, stable, and fast locomotion on a real blind quadruped robot.
The controller is trained in the simulation environment by model-free reinforcement learning.
Our controller has excellent anti-disturbance performance and generalizes well to locomotion speeds it was never trained on.
arXiv Detail & Related papers (2022-07-02T11:20:07Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
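Domain randomization as summarized above amounts to resampling simulator dynamics parameters every episode; a minimal sketch follows, with parameter names and ranges that are illustrative placeholders rather than the paper's settings.

```python
# Illustrative domain-randomization config: resample dynamics parameters each
# episode so the learned policy stays robust to variations at deployment.
import random

RANDOMIZATION_RANGES = {        # placeholder ranges, not the paper's values
    "ground_friction": (0.4, 1.2),
    "link_mass_scale": (0.8, 1.2),
    "motor_strength_scale": (0.9, 1.1),
    "sensor_latency_s": (0.0, 0.02),
}

def sample_episode_dynamics(rng=random):
    """Draw one set of simulator parameters for the next training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# Example (simulator interface is hypothetical):
# sim.set_dynamics(**sample_episode_dynamics())
```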
- Design and Control of Roller Grasper V2 for In-Hand Manipulation [6.064252790182275]
We present a novel non-anthropomorphic robot grasper with the ability to manipulate objects by means of active surfaces at the fingertips.
Active surfaces are achieved by spherical rolling fingertips with two degrees of freedom (DoF).
A further DoF is located in the base of each finger, allowing the fingers to grasp objects over a range of sizes and shapes.
arXiv Detail & Related papers (2020-04-18T00:54:09Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.