Normal-guided Garment UV Prediction for Human Re-texturing
- URL: http://arxiv.org/abs/2303.06504v1
- Date: Sat, 11 Mar 2023 22:18:18 GMT
- Title: Normal-guided Garment UV Prediction for Human Re-texturing
- Authors: Yasamin Jafarian, Tuanfeng Y. Wang, Duygu Ceylan, Jimei Yang, Nathan
Carr, Yi Zhou, Hyun Soo Park
- Abstract summary: We show that it is possible to edit dressed human images and videos without 3D reconstruction.
Our approach captures the underlying geometry of the garment in a self-supervised way.
We demonstrate that our method outperforms the state-of-the-art human UV map estimation approaches on both real and synthetic data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clothes undergo complex geometric deformations, which lead to appearance
changes. To edit human videos in a physically plausible way, a texture map must
take into account not only the garment transformation induced by the body
movements and clothes fitting, but also its 3D fine-grained surface geometry.
This poses, however, a new challenge of 3D reconstruction of dynamic clothes
from an image or a video. In this paper, we show that it is possible to edit
dressed human images and videos without 3D reconstruction. We estimate a
geometry-aware texture map between the garment region in an image and the
texture space, a.k.a. a UV map. Our UV map is designed to preserve isometry with
respect to the underlying 3D surface by making use of the 3D surface normals
predicted from the image. Our approach captures the underlying geometry of the
garment in a self-supervised way, requiring no ground-truth annotation of UV
maps, and can be readily extended to predict temporally coherent UV maps. We
demonstrate that our method outperforms the state-of-the-art human UV map
estimation approaches on both real and synthetic data.
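The isometry constraint described in the abstract can be sketched as a per-pixel loss. This is an illustrative reconstruction, not the authors' implementation: it assumes an orthographic camera, unit normals in camera space, and UV coordinates in pixel units, and `isometry_loss` and its arguments are hypothetical names. It only penalizes stretching along the image axes; a full isometry term would also penalize shear.

```python
import numpy as np

def isometry_loss(uv, normals, eps=1e-6):
    """Penalize deviation of a per-pixel UV map from isometry with
    respect to the surface implied by the predicted normals.

    uv      : (H, W, 2) predicted UV coordinates per pixel.
    normals : (H, W, 3) unit surface normals (camera space, +z toward camera).
    """
    # Finite-difference UV gradients along image x and y.
    duv_dx = uv[:, 1:, :] - uv[:, :-1, :]          # (H, W-1, 2)
    duv_dy = uv[1:, :, :] - uv[:-1, :, :]          # (H-1, W, 2)

    nz = np.clip(np.abs(normals[..., 2]), eps, None)
    # Under an orthographic camera, a 1-pixel step in x stays on the
    # tangent plane only if depth changes by -nx/nz, so the 3D step
    # has length sqrt(1 + (nx/nz)^2); similarly for a step in y.
    len_x = np.sqrt(1.0 + (normals[..., 0] / nz) ** 2)   # (H, W)
    len_y = np.sqrt(1.0 + (normals[..., 1] / nz) ** 2)

    # Isometry: the UV-space step length should match the 3D step length.
    err_x = np.linalg.norm(duv_dx, axis=-1) - len_x[:, :-1]
    err_y = np.linalg.norm(duv_dy, axis=-1) - len_y[:-1, :]
    return np.mean(err_x ** 2) + np.mean(err_y ** 2)
```

Because the loss depends only on the image and the normals predicted from it, it can be minimized without any ground-truth UV annotation, which is the sense in which such a constraint is self-supervised.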
Related papers
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, an emerging approach, has drawn increasing attention for its high-fidelity reconstruction and real-time rendering capabilities.
We propose a novel approach, namely Texture-GS, to disentangle the appearance from the geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z) - Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed for well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on such geometry.
arXiv Detail & Related papers (2023-12-11T18:58:38Z) - xCloth: Extracting Template-free Textured 3D Clothes from a Monocular
Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend PeeledHuman representation to predict the pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization on internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Weakly-Supervised Photo-realistic Texture Generation for 3D Face
Reconstruction [48.952656891182826]
High-fidelity 3D face texture generation has yet to be studied.
The model consists of a UV sampler and a UV generator.
Training is based on pseudo ground truth blended from the 3DMM texture and the input face texture.
arXiv Detail & Related papers (2021-06-14T12:34:35Z) - Learning High Fidelity Depths of Dressed Humans by Watching Social Media
Dance Videos [21.11427729302936]
We present a new method to use the local transformation that warps the predicted local geometry of the person from an image to that of another image at a different time instant.
Our method is end-to-end trainable, resulting in high fidelity depth estimation that predicts fine geometry faithful to the input real image.
arXiv Detail & Related papers (2021-03-04T20:46:30Z) - Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.