Gaussian Garments: Reconstructing Simulation-Ready Clothing with Photorealistic Appearance from Multi-View Video
- URL: http://arxiv.org/abs/2409.08189v1
- Date: Thu, 12 Sep 2024 16:26:47 GMT
- Title: Gaussian Garments: Reconstructing Simulation-Ready Clothing with Photorealistic Appearance from Multi-View Video
- Authors: Boxiang Rong, Artur Grigorev, Wenbo Wang, Michael J. Black, Bernhard Thomaszewski, Christina Tsalicoglou, Otmar Hilliges
- Abstract summary: We introduce a novel approach for reconstructing realistic simulation-ready garment assets from multi-view videos.
Our method represents garments with a combination of a 3D mesh and a Gaussian texture that encodes both the color and high-frequency surface details.
This representation enables accurate registration of garment geometries to multi-view videos and helps disentangle albedo textures from lighting effects.
- Score: 66.98046635045685
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce Gaussian Garments, a novel approach for reconstructing realistic simulation-ready garment assets from multi-view videos. Our method represents garments with a combination of a 3D mesh and a Gaussian texture that encodes both the color and high-frequency surface details. This representation enables accurate registration of garment geometries to multi-view videos and helps disentangle albedo textures from lighting effects. Furthermore, we demonstrate how a pre-trained graph neural network (GNN) can be fine-tuned to replicate the real behavior of each garment. The reconstructed Gaussian Garments can be automatically combined into multi-garment outfits and animated with the fine-tuned GNN.
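As a rough illustration of the representation described in the abstract, the sketch below (not the authors' code; every name, shape, and parameter choice is an assumption) pairs a registered triangle mesh with a Gaussian texture whose texels carry an albedo color plus the parameters of a surface-attached 3D Gaussian:
```python
# Minimal sketch of a "mesh + Gaussian texture" garment asset (assumptions only).
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianTexture:
    albedo: np.ndarray    # (H, W, 3) base color, disentangled from lighting
    offset: np.ndarray    # (H, W, 3) displacement of each Gaussian from the surface
    scale: np.ndarray     # (H, W, 3) anisotropic extent of each Gaussian
    rotation: np.ndarray  # (H, W, 4) orientation as a unit quaternion
    opacity: np.ndarray   # (H, W, 1)

@dataclass
class GaussianGarment:
    vertices: np.ndarray      # (V, 3) garment mesh registered to the video
    faces: np.ndarray         # (F, 3) triangle indices
    uvs: np.ndarray           # (V, 2) texture coordinates in [0, 1]
    texture: GaussianTexture  # appearance + high-frequency detail, queried via UVs

def sample_texel(tex: GaussianTexture, uv: np.ndarray) -> dict:
    """Nearest-neighbor lookup of the Gaussian parameters at one UV coordinate."""
    h, w = tex.albedo.shape[:2]
    i = min(int(uv[1] * h), h - 1)
    j = min(int(uv[0] * w), w - 1)
    return {
        "albedo": tex.albedo[i, j],
        "offset": tex.offset[i, j],
        "scale": tex.scale[i, j],
        "rotation": tex.rotation[i, j],
        "opacity": tex.opacity[i, j],
    }
```
In the paper's pipeline these per-texel Gaussians would be optimized against the multi-view footage and later animated with the fine-tuned GNN; the nearest-neighbor lookup here only keeps the sketch short.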
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Garment Animation NeRF with Color Editing [6.357662418254495]
We propose a novel approach to synthesize garment animations from body motion sequences without the need for an explicit garment proxy.
Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure.
We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency.
arXiv Detail & Related papers (2024-07-29T08:17:05Z)
- Garment3DGen: 3D Garment Stylization and Texture Generation [11.836357439129301]
Garment3DGen is a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance.
We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries.
We generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance.
arXiv Detail & Related papers (2024-03-27T17:59:33Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
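The AniDress summary above hinges on a radiance field conditioned on separate body-motion and garment-motion codes, which is what gives explicit control over the two parts. A minimal PyTorch sketch of that conditioning, with made-up embedding sizes and architecture (not the paper's actual network):
```python
# Hedged sketch of a motion-conditioned radiance field (assumed dimensions).
import torch
import torch.nn as nn

class MotionConditionedNeRF(nn.Module):
    def __init__(self, pos_dim=63, body_dim=32, garment_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + body_dim + garment_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, x_enc, body_code, garment_code):
        # x_enc: (N, pos_dim) positionally encoded sample points;
        # body_code / garment_code: (N, dim) motion features broadcast per point.
        out = self.mlp(torch.cat([x_enc, body_code, garment_code], dim=-1))
        density = torch.relu(out[..., :1])  # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])   # colors in [0, 1]
        return density, rgb
```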
- GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both the public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
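The "learnable garment deformation network" in the summary above maps a body pose to garment deformation. A hedged sketch of one plausible form, an MLP predicting per-vertex displacements over a canonical template (dimensions and architecture are illustrative assumptions, not the paper's design):
```python
# Hedged sketch: pose -> per-vertex garment displacement (assumed architecture).
import torch
import torch.nn as nn

class GarmentDeformationNet(nn.Module):
    def __init__(self, num_vertices: int, pose_dim: int = 72, hidden: int = 256):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_vertices * 3),
        )

    def forward(self, pose: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        # pose: (B, pose_dim) body pose; template: (num_vertices, 3) canonical mesh.
        delta = self.net(pose).view(-1, self.num_vertices, 3)
        return template.unsqueeze(0) + delta  # posed garment vertices
```
Because the displacements are a smooth function of pose, unseen poses still yield plausible deformations, which is the property the summary emphasizes.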
- REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos [23.25620556096607]
Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem.
We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field.
Our approach outperforms existing methods and can produce high-quality dynamic garment surfaces.
arXiv Detail & Related papers (2023-05-23T16:53:10Z)
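REC-MV's summary describes jointly optimizing explicit feature curves and an implicit signed distance field. One way to couple the two, shown purely as an assumption rather than the paper's actual objective, is a loss pulling curve samples (e.g., necklines and hems) onto the SDF's zero level set:
```python
# Hedged sketch of a curve/SDF coupling term (not REC-MV's actual loss).
import torch

def curve_sdf_coupling_loss(sdf, curve_points: torch.Tensor) -> torch.Tensor:
    # sdf: callable mapping (N, 3) points to (N,) signed distances;
    # curve_points: (N, 3) samples on the explicit feature curves.
    # Open garment boundaries should lie on the implicit surface,
    # i.e., the SDF should vanish at every curve sample.
    return sdf(curve_points).abs().mean()

# Usage with a toy SDF of a unit sphere:
sphere_sdf = lambda p: p.norm(dim=-1) - 1.0
pts = torch.randn(128, 3, requires_grad=True)
loss = curve_sdf_coupling_loss(sphere_sdf, pts)
loss.backward()  # gradients flow back to the explicit curve points
```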
- DeepCloth: Neural Garment Representation for Shape and Style Editing [37.595804908189855]
We introduce a novel method, termed DeepCloth, to establish a unified garment representation framework.
Our key idea is to represent garment geometry by a "UV-position map with mask".
We learn a continuous feature space mapped from the above UV space, enabling garment shape editing and transition.
arXiv Detail & Related papers (2020-11-30T08:42:38Z)
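The "UV-position map with mask" idea above can be pictured as rasterizing garment surface points into UV space, with a binary mask marking which texels belong to the garment. A numpy sketch under that reading (the nearest-texel scatter and all names are assumptions):
```python
# Hedged sketch of a UV-position map with mask (illustrative, not DeepCloth's code).
import numpy as np

def make_uv_position_map(vertices, uvs, resolution=256):
    # vertices: (V, 3) garment surface points; uvs: (V, 2) coords in [0, 1].
    pos_map = np.zeros((resolution, resolution, 3), dtype=np.float32)
    mask = np.zeros((resolution, resolution), dtype=bool)
    ij = np.clip((uvs * resolution).astype(int), 0, resolution - 1)
    pos_map[ij[:, 1], ij[:, 0]] = vertices  # scatter 3D positions into UV texels
    mask[ij[:, 1], ij[:, 0]] = True         # mark covered texels as garment
    return pos_map, mask
```
Under this representation, editing the mask changes which regions exist (garment shape and topology), while editing the position values changes the geometry itself, which is what makes shape and style editing tractable in a single space.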
- BCNet: Learning Body and Cloth Shape from A Single Image [56.486796244320125]
We propose a layered garment representation on top of SMPL and, as a novel design, make the garment's skinning weights independent of the body mesh.
Compared with existing methods, our method can support more garment categories and recover more accurate geometry.
arXiv Detail & Related papers (2020-04-01T03:41:36Z)
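BCNet's key design per the summary above is skinning the garment with its own learned weights over the body's skeleton, rather than copying weights from the underlying SMPL body mesh. Standard linear blend skinning with a per-garment weight matrix illustrates the idea (a sketch, not the paper's implementation):
```python
# Linear blend skinning with garment-specific weights (sketch; weights assumed learned).
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    # verts: (V, 3) garment vertices in rest pose
    # weights: (V, J) per-vertex skinning weights, rows sum to 1 -- learned
    #          for the garment, independent of the body mesh's weights
    # joint_transforms: (J, 4, 4) world transforms of the skeleton joints
    verts_h = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    blended = np.einsum("vj,jab->vab", weights, joint_transforms)  # (V, 4, 4)
    posed = np.einsum("vab,vb->va", blended, verts_h)[:, :3]
    return posed
```
Decoupling the garment weights from the body mesh is what lets one skeleton drive many garment categories with differing topology.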
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.