DeepCloth: Neural Garment Representation for Shape and Style Editing
- URL: http://arxiv.org/abs/2011.14619v1
- Date: Mon, 30 Nov 2020 08:42:38 GMT
- Title: DeepCloth: Neural Garment Representation for Shape and Style Editing
- Authors: Zhaoqi Su and Tao Yu and Yangang Wang and Yipeng Li and Yebin Liu
- Abstract summary: We introduce a novel method, termed DeepCloth, to establish a unified garment representation framework.
Our key idea is to represent garment geometry by a "UV-position map with mask".
We learn a continuous feature space mapped from the above UV space, enabling garment shape editing and transition.
- Score: 37.595804908189855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Garment representation, animation, and editing are challenging topics in
computer vision and graphics. Existing methods cannot perform smooth and
plausible garment transitions across different shape styles and topologies.
In this work, we introduce a novel method, termed DeepCloth, to establish a
unified garment representation framework enabling free and smooth garment style
transition. Our key idea is to represent garment geometry by a "UV-position map
with mask", which can describe garments with various shapes and topologies.
Furthermore, we learn a continuous feature space mapped from this UV space,
enabling garment shape editing and transition by controlling the garment
features. Finally, we demonstrate applications of garment animation,
reconstruction, and editing based on our neural garment representation and
encoding method. In sum, the proposed DeepCloth takes a step toward a more
flexible and general 3D garment digitization framework. Experiments demonstrate
that our method achieves state-of-the-art garment modeling results compared
with previous methods.
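As a rough illustration of the core representation, the sketch below builds a "UV-position map with mask" as plain arrays and recovers the garment point cloud from it. The resolution, the toy panel geometry, and the `transition` helper with its `encoder`/`decoder` stand-ins are assumptions for illustration only, not the authors' code.

```python
import numpy as np

# A "UV-position map with mask", sketched as plain arrays: pos_map
# stores one 3D point per UV texel, mask marks the texels that belong
# to the garment. All sizes and geometry here are made up.
H, W = 256, 256
vs, us = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                     indexing="ij")

mask = np.zeros((H, W), dtype=bool)
mask[64:192, 48:208] = True                 # toy "front panel" region

pos_map = np.zeros((H, W, 3), dtype=np.float32)
pos_map[..., 0] = us                        # x follows u
pos_map[..., 1] = vs                        # y follows v
pos_map[..., 2] = 0.1 * np.cos(0.5 * np.pi * us)  # slight curvature
pos_map[~mask] = 0.0                        # invalid texels carry no geometry

points = pos_map[mask]                      # (N, 3) garment point cloud
print(points.shape)

def transition(encoder, decoder, garment_a, garment_b, steps=5):
    """Hypothetical style transition: encode two garments, linearly
    interpolate their feature codes, and decode each step back to a
    UV-position map with mask. `encoder`/`decoder` are stand-ins for
    the paper's networks, not real APIs."""
    za, zb = encoder(garment_a), encoder(garment_b)
    return [decoder((1 - t) * za + t * zb)
            for t in np.linspace(0.0, 1.0, steps)]
```

Because edits to the mask change shape and topology without changing the tensor's size, garments of different styles fit one fixed-size input, which is what lets a single network map them into a common feature space for the interpolation above.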
Related papers
- Gaussian Garments: Reconstructing Simulation-Ready Clothing with Photorealistic Appearance from Multi-View Video [66.98046635045685]
We introduce a novel approach for reconstructing realistic simulation-ready garment assets from multi-view videos.
Our method represents garments with a combination of a 3D mesh and a Gaussian texture that encodes both the color and high-frequency surface details.
This representation enables accurate registration of garment geometries to multi-view videos and helps disentangle albedo textures from lighting effects.
arXiv Detail & Related papers (2024-09-12T16:26:47Z)
- Garment Animation NeRF with Color Editing [6.357662418254495]
We propose a novel approach to synthesize garment animations from body motion sequences without the need for an explicit garment proxy.
Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure.
We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency.
arXiv Detail & Related papers (2024-07-29T08:17:05Z)
- GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [5.790630195329777]
We introduce a novel graph-based warping technique that emphasizes the value of context in garment flow.
Our method, validated on VITON-HD and Dresscode datasets, showcases substantial improvement in garment warping, texture preservation, and overall realism.
arXiv Detail & Related papers (2024-06-04T10:29:18Z)
- GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details [31.92583566128599]
Traditional 3D garment creation is labor-intensive, involving sketching, modeling, UV mapping, and time-consuming processes.
We propose GarmentDreamer, a novel method that leverages 3D Gaussian Splatting (GS) as guidance to generate 3D garments from text prompts.
arXiv Detail & Related papers (2024-05-20T23:54:28Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- Structure-Preserving 3D Garment Modeling with Neural Sewing Machines [190.70647799442565]
We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM can represent 3D garments with diverse shapes and topologies, realistically reconstruct structure-preserving 3D garments from 2D images, and accurately manipulate garment categories, shapes, and topologies.
arXiv Detail & Related papers (2022-11-12T16:43:29Z)
- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video [10.679773937444445]
We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input.
We build statistical deformation models for three types of clothing: T-shirts, short pants, and long pants.
Our method produces temporally coherent reconstruction of body and clothing from monocular video.
arXiv Detail & Related papers (2020-09-22T17:54:38Z)
- BCNet: Learning Body and Cloth Shape from A Single Image [56.486796244320125]
We propose a layered garment representation on top of SMPL and, as a novelty, make the garment's skinning weights independent of the body mesh (a minimal skinning sketch follows this list).
Compared with existing methods, our method can support more garment categories and recover more accurate geometry.
arXiv Detail & Related papers (2020-04-01T03:41:36Z)
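For the BCNet entry above, decoupling garment skinning weights from the body mesh can be pictured with standard linear blend skinning. This is a generic sketch under my own naming, not BCNet's code: the garment simply carries its own per-vertex weight matrix rather than inheriting the body's.

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_T):
    """verts: (V, 3) rest-pose vertices; weights: (V, J) per-vertex
    skinning weights (rows sum to 1); joint_T: (J, 4, 4) rigid joint
    transforms. Returns posed (V, 3) vertices."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    per_vert_T = np.einsum("vj,jab->vab", weights, joint_T)           # (V, 4, 4)
    return np.einsum("vab,vb->va", per_vert_T, homo)[:, :3]

# A garment mesh passes its own learned `weights`, independent of the
# body mesh's weights, so loose cloth is not forced to follow the
# nearest body vertex.
```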
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.