Dress-Me-Up: A Dataset & Method for Self-Supervised 3D Garment
Retargeting
- URL: http://arxiv.org/abs/2401.03108v1
- Date: Sat, 6 Jan 2024 02:28:25 GMT
- Title: Dress-Me-Up: A Dataset & Method for Self-Supervised 3D Garment
Retargeting
- Authors: Shanthika Naik, Kunwar Singh, Astitva Srivastava, Dhawal Sirikonda,
Amit Raj, Varun Jampani, Avinash Sharma
- Abstract summary: We propose a novel framework for retargeting non-parameterized 3D garments onto 3D human avatars of arbitrary shapes and poses.
Existing self-supervised 3D retargeting methods support only parametric and canonical garments.
We show superior retargeting quality on non-parametric garments and human avatars over existing state-of-the-art methods.
- Score: 28.892029042436626
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a novel self-supervised framework for retargeting
non-parameterized 3D garments onto 3D human avatars of arbitrary shapes and
poses, enabling 3D virtual try-on (VTON). Existing self-supervised 3D
retargeting methods support only parametric and canonical garments, which can
be draped only over a parametric body, e.g., SMPL. To handle non-parametric
garments and bodies, we propose a novel method that introduces Isomap
Embedding based correspondence matching between the garment and the human
body to obtain a coarse alignment between the two meshes. We perform neural
refinement of the coarse alignment in a self-supervised setting. Further, we
leverage a Laplacian detail integration method for preserving the inherent
details of the input garment. For evaluating our 3D non-parametric garment
retargeting framework, we propose a dataset of 255 real-world garments with
realistic noise and topological deformations. The dataset contains 44 unique
garments worn by 15 different subjects in 5 distinctive poses, captured using a
multi-view RGBD capture setup. We show superior retargeting quality on
non-parametric garments and human avatars over existing state-of-the-art
methods, acting as the first-ever baseline on the proposed dataset for
non-parametric 3D garment retargeting.
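The abstract names two concrete components that lend themselves to short sketches. First, Isomap Embedding based correspondence matching: embed the vertices of both meshes into a low-dimensional Isomap space and match nearest neighbours there. The snippet below is a minimal sketch, assuming an embedding dimension and neighbourhood size the abstract does not specify; it also glosses over how the two embeddings are made directly comparable.

```python
# Minimal sketch of Isomap-Embedding-based correspondence matching between a
# garment mesh and a body mesh. Embedding dimension and neighbourhood size
# are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.manifold import Isomap


def isomap_embedding(vertices, n_components=3, n_neighbors=8):
    """Embed (N, 3) mesh vertices into an n_components-dim Isomap space."""
    return Isomap(n_neighbors=n_neighbors,
                  n_components=n_components).fit_transform(vertices)


def coarse_correspondences(garment_verts, body_verts):
    """Return, for each garment vertex, the index of its matched body vertex."""
    g_emb = isomap_embedding(garment_verts)
    b_emb = isomap_embedding(body_verts)
    # NOTE (assumption): the two embeddings are treated as directly
    # comparable; a real pipeline must first resolve their axis-order and
    # sign ambiguities (e.g., with a small rigid/Procrustes alignment).
    _, idx = cKDTree(b_emb).query(g_emb, k=1)
    return idx
```

Second, Laplacian detail integration: solve for vertex positions whose differential (Laplacian) coordinates match those of the input garment, while softly anchoring positions to the refined coarse alignment. A least-squares sketch with a uniform graph Laplacian follows; the paper's exact weighting (e.g., cotangent) may differ.

```python
# Least-squares Laplacian detail integration (sketch). verts_in: original
# garment vertices (N, 3); verts_coarse: coarsely aligned positions (N, 3);
# edges: (E, 2) vertex-index pairs of the garment mesh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr


def integrate_details(verts_in, verts_coarse, edges, anchor_w=1.0):
    n = len(verts_in)
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj              # uniform graph Laplacian (assumed)
    delta = lap @ verts_in                 # detail-preserving differential coords
    # Stack the Laplacian constraints with soft positional anchors from the
    # coarse alignment, then solve per coordinate in least squares.
    system = sp.vstack([lap, anchor_w * sp.identity(n)]).tocsr()
    rhs = np.vstack([delta, anchor_w * verts_coarse])
    return np.column_stack([lsqr(system, rhs[:, k])[0] for k in range(3)])
```

The anchor weight trades detail preservation against fidelity to the coarse alignment; in the pipeline the abstract describes, the neural refinement stage would supply `verts_coarse`.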
Related papers
- IPVTON: Image-based 3D Virtual Try-on with Image Prompt Adapter [64.03091978606952]
Given a pair of images depicting a person and a garment separately, image-based 3D virtual try-on methods aim to reconstruct a 3D human model wearing the garment.
We present IPVTON, a novel image-based 3D virtual try-on framework.
arXiv Detail & Related papers (2025-01-26T17:51:03Z)
- L3D-Pose: Lifting Pose for 3D Avatars from a Single Camera in the Wild [15.174438063000453]
3D pose estimation provides a more comprehensive solution by incorporating depth, yet creating 3D pose datasets for animals is challenging due to their dynamic and unpredictable behaviours in natural settings.
We propose a framework with systematically synthesized datasets for lifting poses from 2D to 3D and then utilize this to re-target motion from wild settings onto arbitrary avatars.
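As a rough illustration of the 2D-to-3D lifting step this summary describes, a simple fully connected lifter is sketched below; the joint count, layer sizes, and architecture are assumptions for illustration, not details from the paper.

```python
# Hypothetical 2D-to-3D pose lifting network (a common MLP baseline); the
# actual L3D-Pose architecture is not described in the summary above.
import torch
import torch.nn as nn


class Lifter(nn.Module):
    def __init__(self, n_joints=17, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints * 3),   # per-joint 3D coordinates
        )

    def forward(self, pose2d):                 # pose2d: (B, n_joints, 2)
        b = pose2d.shape[0]
        return self.net(pose2d.reshape(b, -1)).reshape(b, -1, 3)
```

The lifted 3D pose could then be retargeted onto an arbitrary avatar rig, as the summary outlines.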
arXiv Detail & Related papers (2025-01-02T10:04:12Z)
- Garment3DGen: 3D Garment Stylization and Texture Generation [11.836357439129301]
Garment3DGen is a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance.
We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries.
We generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance.
arXiv Detail & Related papers (2024-03-27T17:59:33Z)
- Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation [63.199549837604444]
3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision.
We cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target.
We evaluate different self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-05T03:52:57Z)
- Garment4D: Garment Reconstruction from Point Cloud Sequences [12.86951061306046]
Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses.
Previous works typically rely on 2D images as input, which however suffer from the scale and pose ambiguities.
We propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction.
arXiv Detail & Related papers (2021-12-08T08:15:20Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On [29.458328272854107]
We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on.
We show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
arXiv Detail & Related papers (2021-05-13T17:58:20Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining the ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with the prescribed body-to-cloth contact points and by fitting the clothing silhouette onto the unlabeled silhouette.
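As an illustration of the two weak-supervision signals named above, the loss terms might be sketched as follows; neither formulation is given in the summary, so both functions are hypothetical stand-ins.

```python
# Hypothetical sketches of the two supervision signals: a contact term that
# pulls prescribed cloth points to their body anchors, and a silhouette term
# that compares a (differentiably) rendered cloth mask to the observed one.
import torch
import torch.nn.functional as F


def contact_loss(cloth_pts, body_pts):
    # cloth_pts, body_pts: (K, 3) prescribed body-to-cloth contact pairs
    return ((cloth_pts - body_pts) ** 2).sum(dim=-1).mean()


def silhouette_loss(rendered_mask, target_mask):
    # Both masks: (H, W) floats in [0, 1]; rendered_mask comes from a
    # differentiable renderer so gradients reach the 3D deformation.
    return F.binary_cross_entropy(rendered_mask, target_mask)
```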
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, effective samples are relatively rare in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
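To make the one-parameter-per-step idea concrete, below is a toy greedy variant; the actual method trains a reinforcement-learning policy rather than searching exhaustively, and the scoring function `score` is a hypothetical stand-in for its reward signal.

```python
# Toy axial refinement: starting from an initial 3D box parameterized as
# (x, y, z, w, h, l, yaw), change exactly one parameter per step, keeping
# the move that most improves the (hypothetical) score.
import numpy as np

STEPS = np.array([0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.02])  # per-axis deltas (assumed)


def axial_refine(box, score, n_iters=20):
    box = np.asarray(box, dtype=float)
    for _ in range(n_iters):
        best, best_s = box, score(box)
        for axis in range(7):
            for sign in (1.0, -1.0):           # try a +/- move on one axis only
                cand = box.copy()
                cand[axis] += sign * STEPS[axis]
                s = score(cand)
                if s > best_s:
                    best, best_s = cand, s
        if best is box:                        # no single-axis move improves
            break
        box = best
    return box
```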
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.