GarmentNets: Category-Level Pose Estimation for Garments via Canonical
Space Shape Completion
- URL: http://arxiv.org/abs/2104.05177v1
- Date: Mon, 12 Apr 2021 03:18:00 GMT
- Title: GarmentNets: Category-Level Pose Estimation for Garments via Canonical
Space Shape Completion
- Authors: Cheng Chi and Shuran Song
- Abstract summary: GarmentNets formulates deformable object pose estimation as a shape completion task in the canonical space.
The output representation describes the garment's full configuration using a complete 3D mesh with the per-vertex canonical coordinate label.
Experiments demonstrate that GarmentNets is able to generalize to unseen garment instances and achieve significantly better performance compared to alternative approaches.
- Score: 24.964867275360263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles the task of category-level pose estimation for garments.
With near-infinite degrees of freedom, a garment's full configuration (i.e.,
pose) is often described by the per-vertex 3D locations of its entire 3D
surface. However, garments are also commonly subject to extreme cases of
self-occlusion, especially when folded or crumpled, making it challenging to
perceive their full 3D surface. To address these challenges, we propose
GarmentNets, where the key idea is to formulate the deformable object pose
estimation problem as a shape completion task in the canonical space. This
canonical space is defined across garment instances within a category and
therefore specifies the shared category-level pose. By mapping the observed
partial surface to the canonical space and completing it in this space, the
output representation describes the garment's full configuration using a
complete 3D mesh with the per-vertex canonical coordinate label. To properly
handle the thin 3D structures present on garments, we propose a novel 3D
shape representation using the generalized winding number field. Experiments
demonstrate that GarmentNets is able to generalize to unseen garment instances
and achieve significantly better performance compared to alternative
approaches.
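To make the representation concrete, below is a minimal sketch of a generalized winding number computation, the quantity underlying the proposed shape representation. It follows the standard solid-angle formulation (van Oosterom & Strackee; Jacobson et al., 2013); the function name and argument layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def generalized_winding_number(query, vertices, faces):
    """Generalized winding number of a 3D query point w.r.t. a triangle soup.

    Sums the signed solid angle that each triangle subtends at the query
    point and normalizes by 4*pi. For a watertight mesh the value is ~1
    inside and ~0 outside; for open, thin surfaces such as garments it
    varies smoothly, which makes it a well-behaved volumetric
    representation where occupancy or signed distance break down.
    """
    w = 0.0
    for f in faces:
        # Vectors from the query point to the three triangle vertices.
        a, b, c = (vertices[i] - query for i in f)
        la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
        # Van Oosterom & Strackee formula for the signed solid angle.
        numer = np.dot(a, np.cross(b, c))
        denom = (la * lb * lc
                 + np.dot(a, b) * lc
                 + np.dot(b, c) * la
                 + np.dot(c, a) * lb)
        w += 2.0 * np.arctan2(numer, denom)
    return w / (4.0 * np.pi)
```

Evaluating this quantity on a regular grid over the canonical space yields a winding number field, which is the kind of volumetric target the abstract describes for completing thin garment geometry.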
Related papers
- Point cloud segmentation for 3D Clothed Human Layering [1.0074626918268836]
3D cloth modeling and simulation is essential for avatar creation in several fields, such as fashion, entertainment, and animation. We propose a new 3D point cloud segmentation paradigm where each 3D point can be simultaneously associated with different layers. We create a new synthetic dataset that simulates realistic 3D scans with ground truth for the involved clothing layers.
arXiv Detail & Related papers (2025-08-07T16:02:15Z)
- Incorporating Visual Correspondence into Diffusion Model for Virtual Try-On [89.9123806553489]
Diffusion models have shown success in the virtual try-on (VTON) task. It remains challenging to preserve the shape and every detail of the given garment due to the intrinsic stochasticity of the diffusion model. We propose to explicitly capitalize on visual correspondence as a prior to tame the diffusion process.
arXiv Detail & Related papers (2025-05-22T17:52:13Z)
- CrossVTON: Mimicking the Logic Reasoning on Cross-category Virtual Try-on guided by Tri-zone Priors [63.95051258676488]
CrossVTON is a framework for generating robust fitting images for cross-category virtual try-on.
It disentangles the complex reasoning required for cross-category try-on into a structured framework.
It achieves state-of-the-art performance, surpassing existing baselines in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2025-02-20T09:05:35Z)
- FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on [73.13242624924814]
FitDiT is a garment perception enhancement technique designed for high-fidelity virtual try-on using Diffusion Transformers (DiT).
We introduce a garment texture extractor that incorporates garment prior evolution to fine-tune garment features, better capturing rich details such as stripes, patterns, and text.
We also employ a dilated-relaxed mask strategy that adapts to the correct length of garments, preventing the generation of garments that fill the entire mask area during cross-category try-on.
arXiv Detail & Related papers (2024-11-15T11:02:23Z) - ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z) - GarmentTracking: Category-Level Garment Pose Tracking [36.58359952084771]
We present a complete package to address the category-level garment pose tracking task:
A recording system, VR-Garment, with which users can manipulate virtual garment models in simulation through a VR interface.
A large-scale dataset, VR-Folding, with complex garment pose configurations in manipulation tasks such as flattening and folding.
An end-to-end online tracking framework, GarmentTracking, which predicts the complete garment pose in both canonical space and task space given a point cloud sequence.
arXiv Detail & Related papers (2023-03-24T10:59:17Z)
- Structure-Preserving 3D Garment Modeling with Neural Sewing Machines [190.70647799442565]
We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with the preserved structure, and accurately manipulating the 3D garment categories, shapes, and topologies.
arXiv Detail & Related papers (2022-11-12T16:43:29Z)
- NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments [7.331799534004012]
We propose to use a garment sewing pattern to facilitate the intrinsic garment shape estimation.
We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality.
Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
arXiv Detail & Related papers (2022-01-31T08:33:49Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Generating Datasets of 3D Garments with Sewing Patterns [10.729374293332281]
We create the first large-scale synthetic dataset of 3D garment models with their sewing patterns.
The dataset contains more than 20000 garment design variations produced from 19 different base types.
arXiv Detail & Related papers (2021-09-12T23:03:48Z)
- Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction [92.23666036481399]
We propose a novel learnable Anchored Unsigned Distance Function (AnchorUDF) representation for 3D garment reconstruction from a single image.
AnchorUDF represents 3D shapes by predicting unsigned distance fields (UDFs) to enable open garment surface modeling at arbitrary resolution.
arXiv Detail & Related papers (2021-08-19T03:45:38Z)
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On [9.293488420613148]
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network.
In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments.
Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing.
arXiv Detail & Related papers (2020-09-09T22:38:03Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.