Feel the Force: Contact-Driven Learning from Humans
- URL: http://arxiv.org/abs/2506.01944v1
- Date: Mon, 02 Jun 2025 17:57:52 GMT
- Title: Feel the Force: Contact-Driven Learning from Humans
- Authors: Ademi Adeniji, Zhuoran Chen, Vincent Liu, Venkatesh Pattabiraman, Raunaq Bhirangi, Siddhant Haldar, Pieter Abbeel, Lerrel Pinto
- Abstract summary: Controlling fine-grained forces during manipulation remains a core challenge in robotics. We present FeelTheForce, a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.
- Score: 52.36160086934298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controlling fine-grained forces during manipulation remains a core challenge in robotics. While robot policies learned from robot-collected data or simulation show promise, they struggle to generalize across the diverse range of real-world interactions. Learning directly from humans offers a scalable solution, enabling demonstrators to perform skills in their natural embodiment and in everyday environments. However, visual demonstrations alone lack the information needed to infer precise contact forces. We present FeelTheForce (FTF): a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Using a tactile glove to measure contact forces and a vision-based model to estimate hand pose, we train a closed-loop policy that continuously predicts the forces needed for manipulation. This policy is re-targeted to a Franka Panda robot with tactile gripper sensors using shared visual and action representations. At execution, a PD controller modulates gripper closure to track predicted forces, enabling precise, force-aware control. Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks. Code and videos are available at https://feel-the-force-ftf.github.io.
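The execution step in the abstract, a PD controller that modulates gripper closure to track the policy's predicted force, lends itself to a short sketch. The following is a minimal, hypothetical Python illustration of such a loop; the class name, gains, and the `policy`/`gripper` interfaces in the usage comments are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of the kind of PD force-tracking loop the abstract describes:
# a learned policy predicts a target contact force, and a PD controller
# adjusts gripper closure to track it. Gains and interfaces are illustrative.

class PDForceController:
    def __init__(self, kp: float = 0.002, kd: float = 0.0005):
        self.kp = kp            # proportional gain (meters of width per newton)
        self.kd = kd            # derivative gain (meters per newton/second)
        self.prev_error = 0.0

    def step(self, target_force: float, measured_force: float, dt: float) -> float:
        """Return a gripper-width adjustment driving the measured tactile
        force toward the policy's predicted target force."""
        error = target_force - measured_force
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        # Too little force (positive error) -> close the gripper (reduce width).
        return -(self.kp * error + self.kd * d_error)

# Hypothetical usage at a fixed control rate (e.g., 50 Hz):
#   controller = PDForceController()
#   f_target = policy.predict_force(observation)   # closed-loop policy output
#   f_meas = gripper.read_tactile_force()          # tactile sensor reading
#   width = max(0.0, width + controller.step(f_target, f_meas, dt=0.02))
#   gripper.set_width(width)
```

Tracking force rather than commanding a fixed grip width is what lets the same policy handle objects of different stiffness: the controller keeps closing until the sensed contact force matches the prediction, wherever in the closure range that occurs.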
Related papers
- EgoVLA: Learning Vision-Language-Action Models from Egocentric Human Videos [49.820119587446655]
In this paper, we explore training Vision-Language-Action (VLA) models using egocentric human videos. With a VLA trained on human video that predicts human wrist and hand actions, we can perform Inverse Kinematics and convert the human actions to robot actions. We propose a simulation benchmark called Ego Humanoid Manipulation Benchmark, where we design diverse bimanual manipulation tasks with demonstrations.
arXiv Detail & Related papers (2025-07-16T17:27:44Z)
- TWIST: Teleoperated Whole-Body Imitation System [28.597388162969057]
We present the Teleoperated Whole-Body Imitation System (TWIST), a system for humanoid teleoperation through whole-body motion imitation. We develop a robust, adaptive, and responsive whole-body controller using a combination of reinforcement learning and behavior cloning. TWIST enables real-world humanoid robots to achieve unprecedented, versatile, and coordinated whole-body motor skills.
arXiv Detail & Related papers (2025-05-05T17:59:03Z)
- ForceGrip: Reference-Free Curriculum Learning for Realistic Grip Force Control in VR Hand Manipulation [0.10995326465245926]
We present ForceGrip, a deep learning agent that synthesizes realistic hand manipulation motions. We employ a three-phase curriculum learning framework comprising Finger Positioning, Intention Adaptation, and Dynamic Stabilization. Our evaluations reveal ForceGrip's superior force controllability and plausibility compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-03-11T05:39:07Z)
- HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit [52.12750762494588]
This paper introduces HOMIE, a semi-autonomous teleoperation system. It combines a reinforcement learning policy for body control mapped to a pedal, an isomorphic exoskeleton arm for arm control, and motion-sensing gloves for hand control. The system is fully open-source; demos and code can be found at https://homietele.org/.
arXiv Detail & Related papers (2025-02-18T16:33:38Z)
- Built Different: Tactile Perception to Overcome Cross-Embodiment Capability Differences in Collaborative Manipulation [1.9048510647598207]
Tactile sensing is a powerful means of implicit communication between a human and a robot assistant.
In this paper, we investigate how tactile sensing can transcend cross-embodiment differences across robotic systems.
We show how our method can enable a cooperative task where a robot and human must work together to maneuver objects through space.
arXiv Detail & Related papers (2024-09-23T10:45:41Z)
- Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by placing both the human hand and the manipulated object in a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z)
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
Performing dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
arXiv Detail & Related papers (2024-06-21T09:03:37Z)
- Learning Force Control for Legged Manipulation [18.894304288225385]
We propose a method for training RL policies for direct force control without requiring access to force sensing.
We showcase our method on a whole-body control platform of a quadruped robot with an arm.
We provide the first deployment of learned whole-body force control in legged manipulators, paving the way for more versatile and adaptable legged robots.
arXiv Detail & Related papers (2024-05-02T15:53:43Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)