ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians
- URL: http://arxiv.org/abs/2406.16815v1
- Date: Mon, 24 Jun 2024 17:25:39 GMT
- Title: ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians
- Authors: Yufei Liu, Junshu Tang, Chu Zheng, Shijie Zhang, Jinkun Hao, Junwei Zhu, Dongjin Huang
- Abstract summary: ClotheDreamer is a method for generating wearable, production-ready 3D garment assets from text prompts.
We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization.
- Score: 13.196912161879936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have enabled new possibilities, but their outputs are either tightly coupled with the human body or difficult to reuse. We introduce ClotheDreamer, a 3D Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization: DCGS represents the clothed avatar as a single Gaussian model but freezes the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to supervise the clothed-avatar and garment RGBD renderings respectively with pose conditions, and we propose a new pruning strategy for loose clothing. Our approach also supports custom clothing templates as input. Thanks to this design, the synthesized 3D garments can be directly applied to virtual try-on and support physically accurate animation. Extensive experiments demonstrate our method's superior performance. Our project page is at https://ggxxii.github.io/clothedreamer.
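Below is a minimal sketch, in PyTorch, of the disentangled-optimization idea described in the abstract: the clothed avatar is a single Gaussian set whose body splats are frozen while only the garment splats receive gradients. The class name, attribute layout, opacity-threshold pruning, and training-loop outline are illustrative assumptions standing in for the paper's actual DCGS implementation, bidirectional SDS supervision, and loose-clothing pruning strategy.

```python
import torch

class DisentangledGaussians(torch.nn.Module):
    """Clothed avatar as one Gaussian set: frozen body splats plus trainable garment splats."""

    def __init__(self, n_body: int, n_garment: int):
        super().__init__()

        def splats(n: int, trainable: bool) -> torch.nn.ParameterDict:
            # Per-splat attributes: 3D mean, log-scale, rotation quaternion,
            # opacity logit, and RGB color.
            p = torch.nn.ParameterDict({
                "means": torch.nn.Parameter(torch.randn(n, 3)),
                "log_scales": torch.nn.Parameter(torch.zeros(n, 3)),
                "quats": torch.nn.Parameter(torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(n, 1)),
                "opacity": torch.nn.Parameter(torch.zeros(n, 1)),
                "rgb": torch.nn.Parameter(torch.rand(n, 3)),
            })
            for t in p.values():
                t.requires_grad_(trainable)
            return p

        self.body = splats(n_body, trainable=False)       # frozen body Gaussians
        self.garment = splats(n_garment, trainable=True)  # only these are optimized

    @torch.no_grad()
    def prune_garment(self, min_opacity: float = 0.05) -> None:
        # Stand-in for the paper's loose-clothing pruning strategy: drop
        # near-transparent garment splats. (The optimizer must be rebuilt
        # afterwards, since the parameter tensors change.)
        keep = torch.sigmoid(self.garment["opacity"]).squeeze(-1) > min_opacity
        for k in list(self.garment.keys()):
            self.garment[k] = torch.nn.Parameter(self.garment[k].data[keep])


model = DisentangledGaussians(n_body=20_000, n_garment=10_000)
opt = torch.optim.Adam(model.garment.values(), lr=1e-3)
# Training loop (outline): render RGBD views of (a) the full clothed avatar and
# (b) the garment splats alone under pose conditions, apply an SDS gradient from
# a diffusion model to each rendering ("bidirectional SDS"), step `opt`, and
# periodically call model.prune_garment().
```

Freezing the body splats is what makes the result a reusable asset: the optimized garment Gaussians can be taken off one avatar and applied to another without re-optimizing the body.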
Related papers
- GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details [31.92583566128599]
Traditional 3D garment creation is labor-intensive, involving time-consuming processes such as sketching, modeling, and UV mapping.
We propose GarmentDreamer, a novel method that leverages 3D Gaussian Splatting (GS) as guidance to generate 3D garments from text prompts.
arXiv Detail & Related papers (2024-05-20T23:54:28Z)
- LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer [40.372917698238204]
We present Layered Gaussian Avatars (LayGA), a new representation that formulates body and clothing as two separate layers.
Our representation is built upon the Gaussian map-based avatar for its excellent power to represent garment details.
In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces.
In the multi-layer fitting stage, we train two separate models to represent body and clothing and utilize the reconstructed clothing geometries as 3D supervision.
arXiv Detail & Related papers (2024-05-12T16:11:28Z)
- Layered 3D Human Generation via Semantic-Aware Diffusion Model [63.459666003261276]
We propose a text-driven layered 3D human generation framework based on a novel semantic-aware diffusion model.
To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing.
To match the clothing with different body shapes, we propose a SMPL-driven implicit field deformation network.
arXiv Detail & Related papers (2023-12-10T07:34:43Z)
- AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion [34.609403685504944]
We present AvatarFusion, a framework for zero-shot text-to-avatar generation.
We use a latent diffusion model to provide pixel-level guidance for generating human-realistic avatars.
We also introduce a novel optimization method, called Pixel-Semantics Difference-Sampling (PS-DS), which semantically separates the generation of body and clothes.
arXiv Detail & Related papers (2023-07-13T02:19:56Z)
- DrapeNet: Garment Generation and Self-Supervised Draping [95.0315186890655]
We rely on self-supervision to train a single network to drape multiple garments.
This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network; a minimal sketch of this conditioning appears after this list.
Our pipeline can generate and drape previously unseen garments of any topology.
arXiv Detail & Related papers (2022-11-21T09:13:53Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a point-based 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method for clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining the ground truth data is impossible, i.e., images of people wearing different 3D clothing template models in exactly the same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching prescribed body-to-cloth contact points and fitting the clothing onto unlabeled silhouettes.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
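As noted in the DrapeNet entry above, here is a minimal sketch of its central idea: a single network predicts a 3D deformation field for garment points, conditioned on the garment's latent code from a generative network, so one drape network serves many garments. The class name, layer sizes, SMPL-style 72-dimensional body descriptor, and additive-offset formulation are illustrative assumptions, not DrapeNet's actual architecture.

```python
import torch

class LatentConditionedDeformationField(torch.nn.Module):
    """Predicts per-point 3D offsets for a garment, conditioned on the garment's
    latent code, so a single network can drape many garments."""

    def __init__(self, latent_dim: int = 32, body_dim: int = 72, hidden: int = 256):
        super().__init__()
        # Input: a 3D garment point, the garment latent code, and a body/pose
        # descriptor (e.g., SMPL pose parameters); output: a 3D displacement.
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + latent_dim + body_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, points: torch.Tensor, latent: torch.Tensor, body: torch.Tensor) -> torch.Tensor:
        # points: (N, 3); latent: (latent_dim,); body: (body_dim,)
        cond = torch.cat([latent, body]).expand(points.shape[0], -1)
        return points + self.mlp(torch.cat([points, cond], dim=-1))  # draped points


field = LatentConditionedDeformationField()
draped = field(torch.randn(1024, 3), torch.randn(32), torch.randn(72))
print(draped.shape)  # torch.Size([1024, 3])
```

Because the garment identity enters only through the latent code, the deformation network itself never needs retraining for a new garment, which is what allows draping of previously unseen garments.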
This list is automatically generated from the titles and abstracts of the papers on this site.