GraphGarment: Learning Garment Dynamics for Bimanual Cloth Manipulation Tasks
- URL: http://arxiv.org/abs/2503.05817v2
- Date: Tue, 11 Mar 2025 00:15:22 GMT
- Title: GraphGarment: Learning Garment Dynamics for Bimanual Cloth Manipulation Tasks
- Authors: Wei Chen, Kelin Li, Dongmyoung Lee, Xiaoshuai Chen, Rui Zong, Petar Kormushev
- Abstract summary: GraphGarment is a novel approach that models garment dynamics based on robot control inputs. We use graphs to represent the interactions between the robot end-effector and the garment. We conduct four experiments using six types of garments to validate our approach in both simulation and real-world settings.
- Score: 7.4467523788133585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physical manipulation of garments is often crucial when performing fabric-related tasks, such as hanging garments. However, due to the deformable nature of fabrics, these operations remain a significant challenge for robots in household, healthcare, and industrial environments. In this paper, we propose GraphGarment, a novel approach that models garment dynamics based on robot control inputs and applies the learned dynamics model to facilitate garment manipulation tasks such as hanging. Specifically, we use graphs to represent the interactions between the robot end-effector and the garment. GraphGarment uses a graph neural network (GNN) to learn a dynamics model that can predict the next garment state given the current state and input action in simulation. To address the substantial sim-to-real gap, we propose a residual model that compensates for garment state prediction errors, thereby improving real-world performance. The garment dynamics model is then applied to a model-based action sampling strategy, where it is utilized to manipulate the garment to a reference pre-hanging configuration for garment-hanging tasks. We conducted four experiments using six types of garments to validate our approach in both simulation and real-world settings. In simulation experiments, GraphGarment achieves better garment state prediction performance, with a prediction error 0.46 cm lower than the best baseline. Our approach also demonstrates improved performance in the garment-hanging simulation experiment with enhancements of 12%, 24%, and 10%, respectively. Moreover, real-world robot experiments confirm the robustness of sim-to-real transfer, with an error increase of 0.17 cm compared to simulation results. Supplementary material is available at: https://sites.google.com/view/graphgarment.
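The abstract outlines a three-part pipeline: a GNN dynamics model that maps (current state, action) to the next garment state, a residual model that corrects sim-to-real prediction error, and model-based action sampling toward a reference pre-hanging configuration. The PyTorch sketch below illustrates that pipeline under stated assumptions; the graph construction, network sizes, and every name here (GarmentGNN, residual_net, predict_next_state, sample_best_action, the 0.02 action scale) are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn


class GarmentGNN(nn.Module):
    """One message-passing step over garment nodes plus two gripper nodes."""

    def __init__(self, node_dim: int = 3, hidden: int = 128):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, node_dim))

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # nodes: (N, 3) positions; edges: (E, 2) long tensor of (sender, receiver)
        send, recv = edges[:, 0], edges[:, 1]
        msg = self.edge_mlp(torch.cat([nodes[send], nodes[recv]], dim=-1))
        agg = torch.zeros(nodes.shape[0], msg.shape[-1]).index_add_(0, recv, msg)
        delta = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return nodes + delta  # predicted next garment state


def predict_next_state(gnn, residual_net, nodes, edges, action):
    """One dynamics step: apply the bimanual action, predict the next state,
    then add the learned sim-to-real residual correction."""
    moved = nodes.clone()
    moved[:2] += action  # assumption: nodes 0 and 1 are the two end-effectors
    pred = gnn(moved, edges)
    return pred + residual_net(pred)  # residual_net: (N, 3) -> (N, 3) correction


def sample_best_action(gnn, residual_net, nodes, edges, reference, n_samples=64):
    """Model-based action sampling: score random bimanual moves by how close
    the predicted garment state lands to the reference pre-hanging state."""
    best_action, best_cost = None, float("inf")
    with torch.no_grad():
        for _ in range(n_samples):
            action = 0.02 * torch.randn(2, 3)  # candidate 3D moves, both grippers
            pred = predict_next_state(gnn, residual_net, nodes, edges, action)
            cost = (pred - reference).norm(dim=-1).mean().item()
            if cost < best_cost:
                best_action, best_cost = action, cost
    return best_action, best_cost
```

One design point worth noting: a sampler like this queries the dynamics model dozens of times per control step, which is only practical because a GNN forward pass is cheap compared to re-running a cloth simulator.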
Related papers
- DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy [74.9519138296936]
Garment manipulation is a critical challenge due to the diversity in garment categories, geometries, and deformations. We propose DexGarmentLab, the first environment specifically designed for dexterous (especially bimanual) garment manipulation. It features large-scale, high-quality 3D assets for 15 task scenarios, and refines simulation techniques tailored for garment modeling to reduce the sim-to-real gap.
arXiv Detail & Related papers (2025-05-16T09:26:59Z)
- FoldNet: Learning Generalizable Closed-Loop Policy for Garment Folding via Keypoint-Driven Asset and Demonstration Synthesis [9.22657317122778]
We present a synthetic garment dataset that can be used for robotic garment folding. We generate folding demonstrations in simulation and train folding policies via closed-loop imitation learning. KG-DAgger significantly improves model performance, boosting the real-world success rate by 25% (a generic DAgger-style sketch follows this entry).
arXiv Detail & Related papers (2025-05-14T03:34:30Z)
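KG-DAgger is described only at a high level here, but it builds on the standard DAgger recipe: roll out the learner's policy while relabeling the visited states with expert actions, aggregate, and retrain. Below is a minimal, generic DAgger loop for context; `env`, `expert_action`, and `train_policy` are hypothetical placeholders, and the keypoint-driven asset and demonstration synthesis that distinguishes KG-DAgger is not reproduced.

```python
def dagger(env, initial_policy, expert_action, train_policy, iters=10, horizon=50):
    """Generic DAgger loop: act with the learner, label with the expert."""
    dataset = []                 # aggregated (state, expert action) pairs
    policy = initial_policy
    for _ in range(iters):
        state = env.reset()
        for _ in range(horizon):
            dataset.append((state, expert_action(state)))  # expert relabels
            state, done = env.step(policy(state))          # learner acts
            if done:
                break
        policy = train_policy(dataset)  # retrain on all data gathered so far
    return policy
```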
- Learning 3D Garment Animation from Trajectories of A Piece of Cloth [60.10847645998295]
Garment animation is ubiquitous in various applications, such as virtual reality, gaming, and film production. To mimic the deformations of observed garments, data-driven methods require large-scale garment data. In this paper, instead of garment-wise supervised learning, we adopt a disentangled scheme to learn how to animate observed garments.
arXiv Detail & Related papers (2025-01-02T18:09:42Z)
- VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation [79.00294932026266]
VidMan is a novel framework that employs a two-stage training mechanism to enhance stability and improve data utilization efficiency.
Our framework outperforms the state-of-the-art baseline model GR-1 on the CALVIN benchmark, achieving an 11.7% relative improvement, and demonstrates over 9% precision gains on the OXE small-scale dataset.
arXiv Detail & Related papers (2024-11-14T03:13:26Z)
- SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation [82.61572106180705]
This paper presents a unified approach using vision-language models (VLMs) to improve keypoint prediction across various garment categories.
We created a large-scale synthetic dataset using advanced simulation techniques, allowing scalable training without extensive real-world data.
Experimental results indicate that the VLM-based method significantly enhances keypoint detection accuracy and task success rates.
arXiv Detail & Related papers (2024-09-26T17:26:16Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate strongly from the body, and it generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Towards Multi-Layered 3D Garments Animation [135.77656965678196]
Existing approaches mostly focus on single-layered garments driven only by human bodies and struggle to handle general scenarios.
We propose a novel data-driven method, called LayersNet, to model garment-level animations as particle-wise interactions in a micro physics system (a toy particle-interaction sketch follows this entry).
Our experiments show that LayersNet achieves superior performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-05-17T17:53:04Z)
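To make "particle-wise interactions in a micro physics system" concrete, here is a toy numpy step in which every particle, regardless of which garment layer it belongs to, repels close neighbors by the same rule, so multi-layer contact needs no special casing. This is an illustration of the particle-interaction viewpoint only, not LayersNet's learned model; all parameters are made up.

```python
import numpy as np


def particle_step(pos, vel, radius=0.02, stiffness=50.0, dt=1e-3):
    """One toy update: particles closer than `radius` repel each other, which
    is what keeps separate garment layers from interpenetrating."""
    diff = pos[None, :, :] - pos[:, None, :]  # diff[i, j] = pos[j] - pos[i]
    dist = np.linalg.norm(diff, axis=-1)      # (N, N) pairwise distances
    near = (dist > 0.0) & (dist < radius)     # neighbours, excluding self
    # repulsive force on i from each near j, pointing from j toward i
    push = np.where(
        near[..., None],
        -stiffness * (radius - dist)[..., None] * diff
        / np.maximum(dist[..., None], 1e-8),
        0.0,
    )
    force = push.sum(axis=1) + np.array([0.0, 0.0, -9.8])  # plus gravity
    vel = vel + dt * force
    return pos + dt * vel, vel
```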
- GarmentTracking: Category-Level Garment Pose Tracking [47.219348193140775]
We present a complete package to address the category-level garment pose tracking task:
a recording system, VR-Garment, with which users can manipulate virtual garment models in simulation through a VR interface;
a large-scale dataset, VR-Folding, with complex garment pose configurations in manipulation tasks like flattening and folding;
and an end-to-end online tracking framework, GarmentTracking, which predicts complete garment pose in both canonical space and task space given a point cloud sequence.
arXiv Detail & Related papers (2023-03-24T10:59:17Z)
- HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics [84.29846699151288]
Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes (sketched after this entry).
arXiv Detail & Related papers (2022-12-14T14:24:00Z)
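The hierarchical message-passing idea can be pictured as running extra propagation steps on a coarsened copy of the mesh graph, so long-range (stiff stretching) signals cross the garment in far fewer steps than on the fine mesh alone. The sketch below uses plain neighbor averaging in place of learned edge and node functions; the pooling scheme and all names are assumptions, not HOOD's implementation.

```python
import torch


def propagate(x, edges, steps=1):
    """Neighbour-averaging message passing; a stand-in for learned MLPs."""
    send, recv = edges[:, 0], edges[:, 1]
    ones = torch.ones(send.shape[0], 1)
    for _ in range(steps):
        agg = torch.zeros_like(x).index_add_(0, recv, x[send])
        deg = torch.zeros(x.shape[0], 1).index_add_(0, recv, ones)
        x = 0.5 * x + 0.5 * agg / deg.clamp(min=1.0)
    return x


def hierarchical_pass(x, fine_edges, coarse_ids, coarse_edges):
    """Pool to a coarse graph, propagate there, unpool, then refine locally.
    coarse_ids maps every fine node to its coarse cluster index."""
    n_coarse = int(coarse_ids.max()) + 1
    counts = torch.zeros(n_coarse, 1).index_add_(
        0, coarse_ids, torch.ones(x.shape[0], 1))
    pooled = torch.zeros(n_coarse, x.shape[1]).index_add_(
        0, coarse_ids, x) / counts.clamp(min=1.0)
    coarse = propagate(pooled, coarse_edges, steps=2)  # long-range propagation
    x = x + coarse[coarse_ids]                         # broadcast coarse context back
    return propagate(x, fine_edges, steps=2)           # local refinement
```

The design choice worth noting is that a single coarse-graph step moves information across many fine-mesh edges at once, which is why stiff, fast-propagating stretching modes benefit most from the hierarchy.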
- Motion Guided Deep Dynamic 3D Garments [45.711340917768766]
We focus on motion-guided dynamic 3D garments, especially loose garments.
In a data-driven setup, we first learn a generative space of plausible garment geometries.
We show improvements over multiple state-of-the-art alternatives.
arXiv Detail & Related papers (2022-09-23T07:17:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.