UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
- URL: http://arxiv.org/abs/2405.06903v1
- Date: Sat, 11 May 2024 04:18:41 GMT
- Title: UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
- Authors: Ruihai Wu, Haoran Lu, Yiyan Wang, Yubo Wang, Hao Dong
- Abstract summary: Garment manipulation is essential for future robots to accomplish home-assistant tasks.
We leverage the property that garments in a certain category share similar structures.
We then learn topological dense (point-level) visual correspondence among garments at the category level across different deformations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Garment manipulation (e.g., unfolding, folding and hanging clothes) is essential for future robots to accomplish home-assistant tasks, yet highly challenging due to the diversity of garment configurations, geometries and deformations. Although able to manipulate similarly shaped garments in a certain task, previous works mostly have to design different policies for different tasks, cannot generalize to garments with diverse geometries, and often rely heavily on human-annotated data. In this paper, we leverage the property that garments in a certain category share similar structures, and learn topological dense (point-level) visual correspondence among garments at the category level across different deformations in a self-supervised manner. The topological correspondence can be easily adapted into functional correspondence to guide manipulation policies for various downstream tasks with only one- or few-shot demonstrations. Experiments on garments from 3 categories across 3 representative tasks in diverse scenarios (using one or two arms, taking one or more steps, and starting from flat or messy garments) demonstrate the effectiveness of our proposed method. Project page: https://warshallrho.github.io/unigarmentmanip.
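The abstract's key mechanism, transferring a demonstrated manipulation point to a new garment via dense point-level correspondence, can be illustrated with a minimal sketch. This is an assumption-laden toy (nearest-neighbor matching of per-point feature descriptors by cosine similarity), not the paper's actual architecture; the function names and the shape of the descriptors are hypothetical.

```python
import numpy as np

def match_correspondence(feat_src, feat_tgt):
    """Match each source point to its nearest target point in feature space.

    feat_src: (N, D) per-point descriptors of the source garment
    feat_tgt: (M, D) per-point descriptors of the target garment
    Returns an (N,) array of target point indices.
    """
    # L2-normalize so that dot products equal cosine similarity
    src = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    tgt = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    sim = src @ tgt.T            # (N, M) cosine-similarity matrix
    return sim.argmax(axis=1)    # nearest neighbor per source point

def transfer_keypoint(demo_idx, feat_demo, feat_new):
    """Map a grasp point demonstrated on one garment onto a new garment."""
    return match_correspondence(feat_demo[demo_idx:demo_idx + 1], feat_new)[0]
```

Under this reading, a single demonstration ("grasp point k on garment A") generalizes to any garment in the category whose descriptors were trained to agree across deformations.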
Related papers
- CLASP: General-Purpose Clothes Manipulation with Semantic Keypoints [21.09454149734247]
This paper presents CLothes mAnipulation with Semantic keyPoints (CLASP), which aims at general-purpose clothes manipulation. The core idea of CLASP is semantic keypoints (e.g., "left sleeve", "right shoulder"): a sparse spatial-semantic representation that is salient for both perception and action. CLASP uses semantic keypoints to bridge high-level task planning and low-level action execution.
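How sparse semantic keypoints could bridge a symbolic plan and pick-and-place execution can be sketched as follows. All names, coordinates, and the fold plan are illustrative assumptions, not CLASP's actual API or policy.

```python
# Hypothetical detected 2D image locations of semantic keypoints
keypoints = {
    "left_sleeve": (40, 120),
    "right_sleeve": (200, 120),
    "left_shoulder": (70, 60),
    "right_shoulder": (170, 60),
}

def fold_step(pick_name, place_name, kp):
    """Ground a symbolic 'fold A onto B' step into a pick-and-place action."""
    return {"pick": kp[pick_name], "place": kp[place_name]}

# High-level plan: fold each sleeve inward toward the opposite shoulder
plan = [("left_sleeve", "right_shoulder"), ("right_sleeve", "left_shoulder")]
actions = [fold_step(p, q, keypoints) for p, q in plan]
```

The point of the representation is that the plan is written over garment-generic names, while only the keypoint detector needs to handle the specific garment instance.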
arXiv Detail & Related papers (2025-07-26T15:43:25Z) - DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy [74.9519138296936]
Garment manipulation is a critical challenge due to the diversity in garment categories, geometries, and deformations. We propose DexGarmentLab, the first environment specifically designed for dexterous (especially bimanual) garment manipulation. It features large-scale high-quality 3D assets for 15 task scenarios, and refines simulation techniques tailored for garment modeling to reduce the sim-to-real gap.
arXiv Detail & Related papers (2025-05-16T09:26:59Z) - GarmentPile: Point-Level Visual Affordance Guided Retrieval and Adaptation for Cluttered Garments Manipulation [14.604134812602044]
Unlike single-garment manipulation, cluttered scenarios require managing complex garment entanglements and interactions.
We learn point-level affordance, the dense representation modeling the complex space and multi-modal manipulation candidates.
We introduce an adaptation module, guided by learned affordance, to reorganize highly-entangled garments into states plausible for manipulation.
arXiv Detail & Related papers (2025-03-12T10:39:12Z) - Learning 3D Garment Animation from Trajectories of A Piece of Cloth [60.10847645998295]
Garment animation is ubiquitous in applications such as virtual reality, gaming, and film production.
To mimic the deformations of observed garments, data-driven methods require large-scale garment data.
In this paper, instead of using garment-wise supervised learning, we adopt a disentangled scheme to learn how to animate observed garments.
arXiv Detail & Related papers (2025-01-02T18:09:42Z) - General-purpose Clothes Manipulation with Semantic Keypoints [17.23980132793002]
Clothes manipulation is a critical skill for household robots.
Recent advancements have been made in task-specific clothes manipulation, such as folding, flattening, and hanging.
We propose identifying such specific features, like "left sleeve", as semantic keypoints.
We develop a hierarchical learning framework using a large language model (LLM) for general-purpose CLothes mAnipulation with Semantic keyPoints (CLASP).
arXiv Detail & Related papers (2024-08-15T13:49:14Z) - MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation [70.83668869857665]
MMTryon is a multi-modal multi-reference VIrtual Try-ON framework.
It can generate high-quality compositional try-on results by taking a text instruction and multiple garment images as inputs.
arXiv Detail & Related papers (2024-05-01T11:04:22Z) - ClothCombo: Modeling Inter-Cloth Interaction for Draping Multi-Layered Clothes [3.8079353598215757]
We present ClothCombo, a pipeline to drape arbitrary combinations of clothes on 3D human models.
Our method utilizes a GNN-based network to efficiently model the interaction between clothes in different layers.
arXiv Detail & Related papers (2023-04-07T06:23:54Z) - HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics [84.29846699151288]
Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes.
arXiv Detail & Related papers (2022-12-14T14:24:00Z) - DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that our method, thanks to its end-to-end differentiability, allows body and garment parameters to be recovered jointly from image observations.
arXiv Detail & Related papers (2022-09-22T08:13:59Z) - NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments [7.331799534004012]
We propose to use a garment sewing pattern to facilitate the intrinsic garment shape estimation.
We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality.
Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
arXiv Detail & Related papers (2022-01-31T08:33:49Z) - Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z) - SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z) - Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected within a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z) - GarmentGAN: Photo-realistic Adversarial Fashion Transfer [0.0]
GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try-on items before purchase and generalizes to various apparel types.
arXiv Detail & Related papers (2020-03-04T05:01:15Z)