GarmentTracking: Category-Level Garment Pose Tracking
- URL: http://arxiv.org/abs/2303.13913v1
- Date: Fri, 24 Mar 2023 10:59:17 GMT
- Title: GarmentTracking: Category-Level Garment Pose Tracking
- Authors: Han Xue, Wenqiang Xu, Jieyi Zhang, Tutian Tang, Yutong Li, Wenxin Du,
Ruolin Ye, Cewu Lu
- Abstract summary: We present a complete package to address the category-level garment pose tracking task.
A recording system VR-Garment, with which users can manipulate virtual garment models in simulation through a VR interface.
A large-scale dataset VR-Folding, with complex garment pose configurations in manipulation like flattening and folding.
An end-to-end online tracking framework GarmentTracking, which predicts complete garment pose both in canonical space and task space given a point cloud sequence.
- Score: 36.58359952084771
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Garments are important to humans. A visual system that can estimate and track
the complete garment pose can be useful for many downstream tasks and
real-world applications. In this work, we present a complete package to address
the category-level garment pose tracking task: (1) A recording system
VR-Garment, with which users can manipulate virtual garment models in
simulation through a VR interface. (2) A large-scale dataset VR-Folding, with
complex garment pose configurations in manipulation like flattening and
folding. (3) An end-to-end online tracking framework GarmentTracking, which
predicts complete garment pose both in canonical space and task space given a
point cloud sequence. Extensive experiments demonstrate that the proposed
GarmentTracking achieves strong performance even when the garment undergoes large
non-rigid deformation. It outperforms the baseline approach in both speed and
accuracy. We hope our proposed solution can serve as a platform for future
research. Code and datasets are available at
https://garment-tracking.robotflow.ai.
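As a rough illustration of the tracking setup the abstract describes (per-frame point clouds in, canonical-space and task-space pose out), here is a minimal Python sketch; the class, its `model` callable, and the carried-forward previous prediction are hypothetical, not the released API.

```python
# Hypothetical sketch of an online garment pose tracking loop in the spirit
# of GarmentTracking. Each frame's point cloud is processed together with the
# previous frame's prediction, yielding per-point canonical coordinates
# (canonical space) and the deformed garment mesh (task space).
class GarmentTrackerSketch:
    def __init__(self, model):
        self.model = model      # a trained tracking network (assumed callable)
        self.prev_pose = None   # previous-frame prediction, carried forward

    def track(self, point_cloud_sequence):
        """point_cloud_sequence: iterable of (N_t, 3) float arrays."""
        results = []
        for points in point_cloud_sequence:
            # Condition on the last prediction for temporal coherence.
            canon_coords, task_mesh = self.model(points, self.prev_pose)
            self.prev_pose = task_mesh
            results.append({
                "canonical": canon_coords,  # pose in category canonical space
                "task": task_mesh,          # complete pose in task space
            })
        return results
```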
Related papers
- DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy [74.9519138296936]
Garment manipulation is a critical challenge due to the diversity in garment categories, geometries, and deformations.
We propose DexGarmentLab, the first environment specifically designed for dexterous (especially bimanual) garment manipulation.
It features large-scale high-quality 3D assets for 15 task scenarios, and refines simulation techniques tailored for garment modeling to reduce the sim-to-real gap.
arXiv Detail & Related papers (2025-05-16T09:26:59Z)
- GraphGarment: Learning Garment Dynamics for Bimanual Cloth Manipulation Tasks [7.4467523788133585]
GraphGarment is a novel approach that models garment dynamics based on robot control inputs.
We use graphs to represent the interactions between the robot end-effector and the garment.
We conduct four experiments using six types of garments to validate our approach in both simulation and real-world settings.
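A minimal sketch of how such an end-effector/garment interaction graph could be assembled, assuming garment mesh vertices as nodes and a distance threshold for gripper-to-cloth edges; the threshold and node layout are assumptions, not GraphGarment's actual construction.

```python
# Illustrative construction of an interaction graph between robot
# end-effectors and garment mesh vertices. Node layout, features, and the
# proximity threshold are assumptions, not GraphGarment's actual design.
import numpy as np

def build_interaction_graph(verts, mesh_edges, ee_positions, radius=0.05):
    """verts: (V, 3) garment vertices; mesh_edges: (E, 2) index pairs;
    ee_positions: (K, 3) robot end-effector positions."""
    nodes = np.concatenate([verts, ee_positions], axis=0)
    edges = [tuple(e) for e in mesh_edges]      # garment connectivity edges
    for k, ee in enumerate(ee_positions):
        # Connect each end-effector node to nearby garment vertices.
        near = np.where(np.linalg.norm(verts - ee, axis=1) < radius)[0]
        edges += [(len(verts) + k, int(v)) for v in near]
    return nodes, np.asarray(edges)
```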
arXiv Detail & Related papers (2025-03-04T17:35:48Z)
- Gaussian Garments: Reconstructing Simulation-Ready Clothing with Photorealistic Appearance from Multi-View Video [66.98046635045685]
We introduce a novel approach for reconstructing realistic simulation-ready garment assets from multi-view videos.
Our method represents garments with a combination of a 3D mesh and a Gaussian texture that encodes both the color and high-frequency surface details.
This representation enables accurate registration of garment geometries to multi-view videos and helps disentangle albedo textures from lighting effects.
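Read literally, the representation pairs mesh geometry with surface-attached Gaussians carrying disentangled appearance; a minimal data-structure sketch under that reading follows, with all field names being assumptions.

```python
# A minimal data-structure sketch of a simulation-ready garment asset as the
# summary describes it: a 3D mesh plus a Gaussian texture storing lighting-free
# color (albedo) and high-frequency detail. All field names are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class GarmentAssetSketch:
    vertices: np.ndarray        # (V, 3) rest-pose mesh geometry
    faces: np.ndarray           # (F, 3) triangle indices
    gaussian_means: np.ndarray  # (G, 3) Gaussian centers on the surface
    albedo: np.ndarray          # (G, 3) per-Gaussian color, lighting removed
    detail: np.ndarray          # (G, D) high-frequency appearance features
```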
arXiv Detail & Related papers (2024-09-12T16:26:47Z)
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 targets the virtual dressing task: generating freely editable human images with a fixed garment and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
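A hedged PyTorch sketch of such a hybrid attention block, with a frozen self-attention branch and a trainable cross-attention branch injecting garment features; the residual fusion and shapes are assumptions, not the paper's exact module.

```python
# A hedged PyTorch sketch of a hybrid attention block: a frozen self-attention
# branch plus a trainable cross-attention branch that injects garment features
# into a frozen denoising UNet. Residual fusion and shapes are assumptions.
import torch.nn as nn

class HybridAttentionSketch(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad = False  # self-attention branch stays frozen

    def forward(self, hidden, garment_feats):
        # hidden: (B, N, dim) denoising-UNet tokens;
        # garment_feats: (B, M, dim) features from the garment UNet.
        h, _ = self.self_attn(hidden, hidden, hidden)
        g, _ = self.cross_attn(hidden, garment_feats, garment_feats)
        return hidden + h + g  # residual fusion of both branches
```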
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
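As a rough picture of a radiance field conditioned on separate body and garment motion codes, consider this sketch; the MLP layout, code dimension, and conditioning scheme are illustrative assumptions, not AniDress's actual network.

```python
# An illustrative radiance-field head conditioned on separate body and garment
# motion codes; the MLP layout, code dimension, and concatenation-based
# conditioning are assumptions made for the sketch.
import torch
import torch.nn as nn

class ConditionedRadianceFieldSketch(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 * code_dim, 256), nn.ReLU(),
            nn.Linear(256, 4),  # RGB + density per sample point
        )

    def forward(self, points, body_code, garment_code):
        # points: (N, 3) 3D samples; codes: (code_dim,) motion embeddings.
        cond = torch.cat([body_code, garment_code]).expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, cond], dim=-1))
```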
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- UmeTrack: Unified multi-view end-to-end hand tracking for VR [34.352638006495326]
Real-time tracking of 3D hand pose in world space is a challenging problem and plays an important role in VR interaction.
We present a unified end-to-end differentiable framework for multi-view, multi-frame hand tracking that directly predicts 3D hand pose in world space.
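One way to read "multi-view, end-to-end" is per-view encoding followed by cross-view fusion and direct regression of world-space joints; the sketch below illustrates that reading only, with an architecture that is entirely an assumption.

```python
# A sketch of one possible multi-view, end-to-end formulation: encode each
# calibrated view, fuse features across views, and regress world-space joints
# directly. The architecture here is entirely an assumption for illustration.
import torch.nn as nn

class MultiViewPoseSketch(nn.Module):
    def __init__(self, feat_dim=128, n_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, n_joints * 3)

    def forward(self, views):
        # views: (B, V, 3, H, W) images from V cameras of the same frame.
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).flatten(1)  # (B*V, feat_dim)
        fused = feats.view(b, v, -1).mean(dim=1)              # pool across views
        return self.head(fused).view(b, -1, 3)                # (B, n_joints, 3)
```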
arXiv Detail & Related papers (2022-10-31T19:09:21Z)
- Motion Guided Deep Dynamic 3D Garments [45.711340917768766]
We focus on motion guided dynamic 3D garments, especially for loose garments.
In a data-driven setup, we first learn a generative space of plausible garment geometries.
We show improvements over multiple state-of-the-art alternatives.
arXiv Detail & Related papers (2022-09-23T07:17:46Z)
- Garment Avatars: Realistic Cloth Driving using Pattern Registration [39.936812232884954]
We propose an end-to-end pipeline for building drivable representations for clothing.
A Garment Avatar is an expressive and fully-drivable geometry model for a piece of clothing.
We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application.
arXiv Detail & Related papers (2022-06-07T15:06:55Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- GarmentNets: Category-Level Pose Estimation for Garments via Canonical Space Shape Completion [24.964867275360263]
GarmentNets formulates the deformable object pose estimation problem as a shape completion task in the canonical space.
The output representation describes the garment's full configuration using a complete 3D mesh with the per-vertex canonical coordinate label.
Experiments demonstrate that GarmentNets is able to generalize to unseen garment instances and achieve significantly better performance compared to alternative approaches.
arXiv Detail & Related papers (2021-04-12T03:18:00Z)
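To make the canonical-space formulation concrete, here is a hedged sketch of the two-step pipeline the summary implies; both network interfaces are hypothetical.

```python
# A hedged sketch of the canonical-space formulation: map observed points to
# per-point canonical coordinates, then complete the full garment mesh in that
# canonical space. Both network interfaces are hypothetical.
def estimate_pose_via_canonicalization(points, canon_net, completion_net):
    """points: (N, 3) partial point cloud observed in task space."""
    canon_coords = canon_net(points)              # (N, 3) canonical coordinates
    verts, faces = completion_net(canon_coords)   # complete mesh, canonical space
    # The completed mesh with per-vertex canonical coordinates describes the
    # garment's full configuration.
    return verts, faces, canon_coords
```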