ForceGrip: Reference-Free Curriculum Learning for Realistic Grip Force Control in VR Hand Manipulation
- URL: http://arxiv.org/abs/2503.08061v3
- Date: Wed, 30 Apr 2025 14:03:25 GMT
- Title: ForceGrip: Reference-Free Curriculum Learning for Realistic Grip Force Control in VR Hand Manipulation
- Authors: DongHeun Han, Byungmin Kim, RoUn Lee, KyeongMin Kim, Hyoseok Hwang, HyeongYeop Kang
- Abstract summary: We present ForceGrip, a deep learning agent that synthesizes realistic hand manipulation motions. We employ a three-phase curriculum learning framework comprising Finger Positioning, Intention Adaptation, and Dynamic Stabilization. Our evaluations reveal ForceGrip's superior force controllability and plausibility compared to state-of-the-art methods.
- Score: 0.10995326465245926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic hand manipulation is a key component of immersive virtual reality (VR), yet existing methods often rely on kinematic approaches or motion-capture datasets that omit crucial physical attributes such as contact forces and finger torques. Consequently, these approaches prioritize tight, one-size-fits-all grips rather than reflecting users' intended force levels. We present ForceGrip, a deep learning agent that synthesizes realistic hand manipulation motions that faithfully reflect the user's grip-force intention. Instead of mimicking predefined motion datasets, ForceGrip uses generated training scenarios (randomizing object shapes, wrist movements, and trigger input flows) to challenge the agent with a broad spectrum of physical interactions. To learn effectively from these complex tasks, we employ a three-phase curriculum learning framework comprising Finger Positioning, Intention Adaptation, and Dynamic Stabilization. This progressive strategy ensures stable hand-object contact, adaptive force control based on user inputs, and robust handling under dynamic conditions. Additionally, a proximity reward function enhances natural finger motions and accelerates training convergence. Quantitative and qualitative evaluations reveal ForceGrip's superior force controllability and plausibility compared to state-of-the-art methods. Demo videos are available as supplementary material, and the code is provided at https://han-dongheun.github.io/ForceGrip.
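The abstract names two training ingredients, a proximity reward and a three-phase curriculum, without giving their formulation. The Python sketch below illustrates one plausible shape for these components; the exponential reward shaping, the phase boundaries, and all function names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the two ideas named in the abstract: a proximity
# reward that draws fingertips toward the object surface, and a three-phase
# curriculum switch. Thresholds and shaping are illustrative assumptions.

PHASES = ("finger_positioning", "intention_adaptation", "dynamic_stabilization")

def proximity_reward(fingertip_positions, surface_points, scale=0.05):
    """Reward rises as each fingertip nears its closest object surface point."""
    rewards = []
    for tip in fingertip_positions:
        d = np.min(np.linalg.norm(surface_points - tip, axis=1))
        rewards.append(np.exp(-d / scale))  # ~1.0 at contact, decays with distance
    return float(np.mean(rewards))

def curriculum_phase(episode, boundaries=(2_000, 6_000)):
    """Pick the training phase from the episode count (assumed boundaries)."""
    if episode < boundaries[0]:
        return PHASES[0]   # learn stable hand-object contact only
    if episode < boundaries[1]:
        return PHASES[1]   # add trigger-driven force intention
    return PHASES[2]       # add wrist motion / dynamic disturbances

# Example: evaluate the reward and phase for one simulated step.
tips = np.random.rand(5, 3)
surface = np.random.rand(256, 3)
print(curriculum_phase(episode=3_500), round(proximity_reward(tips, surface), 3))
```

In a setup like this, the phase label would gate which scenario randomizations and reward terms are active during each training episode.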
Related papers
- Dynamic object goal pushing with mobile manipulators through model-free constrained reinforcement learning [9.305146484955296]
We develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions.
The proposed controller for the robotic arm and the mobile base motion is trained using a constrained Reinforcement Learning (RL) formulation.
The learned policy achieves a success rate of 91.35% in simulation and at least 80% on hardware in challenging scenarios.
arXiv Detail & Related papers (2025-02-03T17:28:35Z) - Learning Gentle Grasping from Human-Free Force Control Demonstration [4.08734863805696]
We propose an approach for learning grasping from ideal force control demonstrations to achieve performance similar to that of human hands with a limited data size. Our approach utilizes objects with known contact characteristics to automatically generate reference force curves without human demonstrations. The described method can be effectively applied with vision-based tactile sensors and enables gentle and stable grasping of objects from the ground.
arXiv Detail & Related papers (2024-09-16T15:14:53Z) - AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real Touch [9.606323817785114]
We present AnyRotate, a system for gravity-invariant multi-axis in-hand object rotation using dense featured sim-to-real touch.
Our formulation allows the training of a unified policy to rotate unseen objects about arbitrary rotation axes in any hand direction.
Rich multi-fingered tactile sensing can detect unstable grasps and provide a reactive behavior that improves the robustness of the policy.
arXiv Detail & Related papers (2024-05-12T22:51:35Z) - Continual Policy Distillation of Reinforcement Learning-based Controllers for Soft Robotic In-Hand Manipulation [5.601529531526852]
Soft robotic hands offer flexibility and adaptability during object grasping and manipulation.
We introduce a Continual Policy Distillation framework to acquire a versatile controller for in-hand manipulation.
arXiv Detail & Related papers (2024-04-05T17:05:45Z) - Twisting Lids Off with Two Hands [82.21668778600414]
We show how policies trained in simulation can be effectively and efficiently transferred to the real world.
Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands.
This is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands.
arXiv Detail & Related papers (2024-03-04T18:59:30Z) - Modular Neural Network Policies for Learning In-Flight Object Catching with a Robot Hand-Arm System [55.94648383147838]
We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of five core modules: (i) an object state estimator that learns object trajectory prediction, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, and (iv) a grasping control policy trained to perform soft catching motions (a minimal sketch of how these modules might be composed appears after this list).
We conduct extensive evaluations of our framework in simulation for each module and the integrated system to demonstrate high success rates of in-flight catching.
arXiv Detail & Related papers (2023-12-21T16:20:12Z) - Towards Transferring Tactile-based Continuous Force Control Policies from Simulation to Robot [19.789369416528604]
Grasp force control aims to manipulate objects safely by limiting the amount of force exerted on the object.
Prior works have either hand-modeled their force controllers, employed model-based approaches, or have not shown sim-to-real transfer.
We propose a model-free deep reinforcement learning approach trained in simulation and then transferred to the robot without further fine-tuning.
arXiv Detail & Related papers (2023-11-13T11:29:06Z) - CALM: Conditional Adversarial Latent Models for Directable Virtual Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z) - Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z) - DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z) - Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z) - Learning Compliance Adaptation in Contact-Rich Manipulation [81.40695846555955]
We propose a novel approach for learning predictive models of force profiles required for contact-rich tasks.
The approach combines anomaly detection based on Bidirectional Gated Recurrent Units (Bi-GRU) with an adaptive force/impedance controller (a minimal sketch of this pattern appears after this list).
arXiv Detail & Related papers (2020-05-01T05:23:34Z)
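As referenced in the "Learning Compliance Adaptation in Contact-Rich Manipulation" entry above, the pairing of a Bi-GRU force-profile predictor with an adaptive impedance controller can be sketched as follows. This is a minimal PyTorch illustration under assumed dimensions, thresholds, and stiffness values; it is not the paper's actual model.

```python
import torch
import torch.nn as nn

# Illustrative-only sketch: a Bi-GRU predicts the expected next force sample,
# and a large prediction error is treated as an anomaly that triggers a softer
# impedance setting. All dimensions and constants are assumptions.

class ForceProfilePredictor(nn.Module):
    def __init__(self, force_dim=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(force_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, force_dim)  # predict the next force sample

    def forward(self, force_window):                  # (batch, time, force_dim)
        features, _ = self.gru(force_window)
        return self.head(features[:, -1])             # prediction for the next step

def adapt_stiffness(predictor, force_window, measured_next,
                    k_nominal=800.0, k_soft=200.0, threshold=5.0):
    """Drop to a compliant stiffness when the measured wrench deviates from the prediction."""
    with torch.no_grad():
        error = torch.norm(predictor(force_window) - measured_next)
    return k_soft if error.item() > threshold else k_nominal

# Example usage with random stand-in data (1 sequence, 50 steps, 6-axis wrench).
model = ForceProfilePredictor()
window = torch.randn(1, 50, 6)
next_wrench = torch.randn(1, 6)
print(adapt_stiffness(model, window, next_wrench))
```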
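The "Modular Neural Network Policies for Learning In-Flight Object Catching" entry enumerates a modular architecture. Below is a hypothetical sketch of how the four modules named in the summary might be composed into a single control step; the interfaces and the sequential dispatch are assumptions, and the fifth module mentioned but not named in the summary is omitted.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical wiring of the four modules listed in the summary above.
@dataclass
class CatchingPipeline:
    estimate_trajectory: Callable[[Sequence[float]], list]  # (i) object state estimator
    rank_catch_poses: Callable[[list], list]                # (ii) catching pose quality network
    reach_policy: Callable[[list], dict]                    # (iii) reaching control policy
    grasp_policy: Callable[[dict], dict]                    # (iv) grasping control policy

    def step(self, object_observation):
        trajectory = self.estimate_trajectory(object_observation)
        best_poses = self.rank_catch_poses(trajectory)
        pre_catch = self.reach_policy(best_poses)
        return self.grasp_policy(pre_catch)

# Example wiring with stand-in functions.
pipeline = CatchingPipeline(
    estimate_trajectory=lambda obs: [list(obs)],
    rank_catch_poses=lambda traj: traj,
    reach_policy=lambda poses: {"pre_catch_pose": poses[0]},
    grasp_policy=lambda pose: {"fingers": "close", **pose},
)
print(pipeline.step([0.1, 0.2, 0.3]))
```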