RUKA: Rethinking the Design of Humanoid Hands with Learning
- URL: http://arxiv.org/abs/2504.13165v1
- Date: Thu, 17 Apr 2025 17:58:59 GMT
- Title: RUKA: Rethinking the Design of Humanoid Hands with Learning
- Authors: Anya Zorin, Irmak Guzey, Billy Yan, Aadhithya Iyer, Lisa Kondrich, Nikhil X. Bhattasali, Lerrel Pinto,
- Abstract summary: This work presents RUKA, a tendon-driven humanoid hand that is compact, affordable, and capable. RUKA has 5 fingers with 15 underactuated degrees of freedom enabling diverse human-like grasps. To address control challenges, we learn joint-to-actuator and fingertip-to-actuator models from motion-capture data collected by the MANUS glove.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Dexterous manipulation is a fundamental capability for robotic systems, yet progress has been limited by hardware trade-offs between precision, compactness, strength, and affordability. Existing control methods impose compromises on hand designs and applications. However, learning-based approaches present opportunities to rethink these trade-offs, particularly to address challenges with tendon-driven actuation and low-cost materials. This work presents RUKA, a tendon-driven humanoid hand that is compact, affordable, and capable. Made from 3D-printed parts and off-the-shelf components, RUKA has 5 fingers with 15 underactuated degrees of freedom enabling diverse human-like grasps. Its tendon-driven actuation allows powerful grasping in a compact, human-sized form factor. To address control challenges, we learn joint-to-actuator and fingertip-to-actuator models from motion-capture data collected by the MANUS glove, leveraging the hand's morphological accuracy. Extensive evaluations demonstrate RUKA's superior reachability, durability, and strength compared to other robotic hands. Teleoperation tasks further showcase RUKA's dexterous movements. The open-source design and assembly instructions of RUKA, code, and data are available at https://ruka-hand.github.io/.
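The abstract describes learning a joint-to-actuator model from motion-capture data. As a minimal sketch of that idea, the toy example below fits a linear least-squares map from joint angles to tendon-motor commands on a fabricated dataset; the joint and actuator counts, the linear model, and the synthetic data are all illustrative assumptions, not RUKA's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 15      # RUKA: 15 underactuated degrees of freedom
N_ACTUATORS = 11   # assumption: fewer motors than joints (underactuation)

# Toy "mocap" dataset: joint angles and the motor commands that produced them.
# In the paper this data comes from the MANUS glove; here it is synthetic.
true_map = rng.normal(size=(N_JOINTS, N_ACTUATORS))
joint_angles = rng.uniform(-1.0, 1.0, size=(500, N_JOINTS))
motor_cmds = joint_angles @ true_map + 0.01 * rng.normal(size=(500, N_ACTUATORS))

# Fit the joint-to-actuator map by least squares.
W, *_ = np.linalg.lstsq(joint_angles, motor_cmds, rcond=None)

def joints_to_actuators(q):
    """Predict motor commands for a desired joint configuration q."""
    return q @ W

pred = joints_to_actuators(joint_angles)
rmse = float(np.sqrt(np.mean((pred - motor_cmds) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

A learned map like this replaces an analytic tendon-routing model, which is the trade-off the abstract highlights for low-cost tendon-driven hands.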
Related papers
- Learning Bimanual Manipulation via Action Chunking and Inter-Arm Coordination with Transformers [4.119006369973485]
We focus on coordination and efficiency between both arms, particularly synchronized actions. We propose a novel imitation learning architecture that predicts cooperative actions. Our model demonstrates a high success rate compared to baselines and suggests a suitable architecture for policy learning of bimanual manipulation.
arXiv Detail & Related papers (2025-03-18T05:20:34Z) - HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit [52.12750762494588]
Current humanoid teleoperation systems either lack reliable low-level control policies, or struggle to acquire accurate whole-body control commands. We propose a novel humanoid teleoperation cockpit that integrates a humanoid loco-manipulation policy and a low-cost exoskeleton-based hardware system.
arXiv Detail & Related papers (2025-02-18T16:33:38Z) - From Human Hands to Robotic Limbs: A Study in Motor Skill Embodiment for Telemanipulation [3.7482358401236398]
We propose a GRU-based Variational Autoencoder to learn a latent representation of the manipulator's configuration space. A fully connected neural network maps human arm configurations into this latent space, allowing the system to mimic and generate corresponding manipulator trajectories in real time.
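The summary above describes a recurrent encoder that compresses manipulator trajectories into a latent space, plus a fully connected map from human arm configurations into that same space. The toy forward pass below mirrors that structure only: the weights are random (untrained), all dimensions are assumed, and the full method is a trained GRU-based Variational Autoencoder, not this plain GRU encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

def gru_cell(x, h, W, U, b):
    """One GRU step; W, U, b stack the update/reset/candidate gate weights."""
    z = 1 / (1 + np.exp(-(x @ W[0] + h @ U[0] + b[0])))  # update gate
    r = 1 / (1 + np.exp(-(x @ W[1] + h @ U[1] + b[1])))  # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])        # candidate state
    return (1 - z) * h + z * n

CFG_DIM, HID, LATENT, ARM_DIM = 7, 16, 4, 6  # illustrative sizes

W = rng.normal(0, 0.3, size=(3, CFG_DIM, HID))
U = rng.normal(0, 0.3, size=(3, HID, HID))
b = np.zeros((3, HID))
enc_head = rng.normal(0, 0.3, size=(HID, LATENT))    # hidden -> latent code
arm_fc = rng.normal(0, 0.3, size=(ARM_DIM, LATENT))  # human arm -> latent code

def encode_trajectory(cfgs):
    """Run the GRU over a manipulator-configuration trajectory."""
    h = np.zeros(HID)
    for x in cfgs:
        h = gru_cell(x, h, W, U, b)
    return h @ enc_head  # in the full model, this would be the VAE mean

traj = rng.uniform(-1, 1, size=(20, CFG_DIM))
z_robot = encode_trajectory(traj)                     # robot-side latent
z_human = rng.uniform(-1, 1, size=ARM_DIM) @ arm_fc   # human-side latent
print(z_robot.shape, z_human.shape)
```

Both encoders land in the same latent space, which is what lets a human arm configuration be decoded into a manipulator trajectory in real time.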
arXiv Detail & Related papers (2025-02-04T05:52:57Z) - Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z) - Proprioceptive External Torque Learning for Floating Base Robot and its Applications to Humanoid Locomotion [17.384713355349476]
This paper introduces a method for learning external joint torque solely using proprioceptive sensors (encoders and IMUs) for a floating base robot.
Real robot experiments demonstrate that the network can estimate the external torque and contact wrench with significantly smaller errors.
The study also validates that the estimated contact wrench can be utilized for zero moment point (ZMP) feedback control.
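The summary above mentions using the estimated contact wrench for zero moment point (ZMP) feedback control. As a hedged illustration of that last step, the sketch below computes the ZMP on the sole from a wrench measured (or estimated) at a frame a height `d` above it, using the standard ZMP-from-wrench relation; the numeric wrench values are made up for illustration.

```python
import numpy as np

def zmp_from_wrench(force, moment, d):
    """ZMP (x, y) on the sole from a wrench at a frame of height d above it."""
    fx, fy, fz = force
    mx, my, _ = moment
    px = (-my - fx * d) / fz  # moment balance about the y-axis
    py = (mx - fy * d) / fz   # moment balance about the x-axis
    return np.array([px, py])

force = np.array([5.0, -2.0, 400.0])  # N: mostly vertical support force
moment = np.array([3.0, -6.0, 0.5])   # N*m at the sensor frame
zmp = zmp_from_wrench(force, moment, d=0.05)
print(zmp)  # ZMP offset from the sensor frame origin, in metres
```

Feeding an estimated wrench through this relation is what allows ZMP feedback without a dedicated force-torque sensor.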
arXiv Detail & Related papers (2023-09-08T05:33:56Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We show experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect the 3D demonstrations efficiently with only an iPad and a computer.
We construct a customized robot hand for each user in the physical simulator, which is a manipulator resembling the same kinematics structure and shape of the operator's hand.
With imitation learning using our data, we show large improvement over baselines with multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-to-robot handover approaches do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control [67.23580984118479]
Imitation Learning (IL) is an effective framework to learn visuomotor skills from offline demonstration data.
Hand-eye Action Networks (HAN) can approximate human hand-eye coordination behaviors by learning from human teleoperated demonstrations.
arXiv Detail & Related papers (2021-02-28T01:49:13Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
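The residual setting described above composes a base action with a learned correction. The sketch below shows only that composition: both "policies" are placeholder linear maps with random weights, and the single gradient-free update is a stand-in for the paper's model-free hybrid RL+IL training.

```python
import numpy as np

rng = np.random.default_rng(2)
OBS_DIM, ACT_DIM = 10, 4  # illustrative sizes

W_base = rng.normal(0, 0.1, size=(OBS_DIM, ACT_DIM))  # fixed base controller
W_res = np.zeros((OBS_DIM, ACT_DIM))  # residual starts at zero: pure base policy

def act(obs):
    """Residual control: base action plus learned correction."""
    return obs @ W_base + obs @ W_res

obs = rng.uniform(-1, 1, size=OBS_DIM)
a_before = act(obs)                            # equals the base action initially
W_res += 0.01 * rng.normal(size=W_res.shape)   # stand-in for an RL update
a_after = act(obs)                             # base action plus small residual
print(a_before.shape, a_after.shape)
```

Starting the residual at zero means the agent initially behaves exactly like the noisy-pose-driven base controller and only gradually learns corrections on top of it.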
arXiv Detail & Related papers (2020-08-07T17:34:28Z) - DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation [16.54834671357377]
General purpose in-hand manipulation remains one of the unsolved challenges of robotics.
We introduce DIGIT, an inexpensive, compact, and high-resolution tactile sensor geared towards in-hand manipulation.
arXiv Detail & Related papers (2020-05-29T17:07:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.