StyleUV: Diverse and High-fidelity UV Map Generative Model
- URL: http://arxiv.org/abs/2011.12893v1
- Date: Wed, 25 Nov 2020 17:19:44 GMT
- Title: StyleUV: Diverse and High-fidelity UV Map Generative Model
- Authors: Myunggi Lee, Wonwoong Cho, Moonheum Kim, David Inouye, Nojun Kwak
- Abstract summary: We present a novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training.
Both quantitative and qualitative evaluations demonstrate that our proposed texture model produces more diverse and higher fidelity textures compared to existing methods.
- Score: 24.982824840625216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing 3D human faces in the wild with the 3D Morphable Model (3DMM)
has become popular in recent years. While most prior work focuses on estimating
more robust and accurate geometry, relatively little attention has been paid to
improving the quality of the texture model. Meanwhile, with the advent of
Generative Adversarial Networks (GANs), there has been great progress in
reconstructing realistic 2D images. Recent work demonstrates that GANs trained
with abundant high-quality UV maps can produce high-fidelity textures superior
to those produced by existing methods. However, acquiring such high-quality UV
maps is difficult because they are expensive to acquire, requiring laborious
processes to refine. In this work, we present a novel UV map generative model
that learns to generate diverse and realistic synthetic UV maps without
requiring high-quality UV maps for training. Our proposed framework can be
trained solely with in-the-wild images (i.e., UV maps are not required) by
leveraging a combination of GANs and a differentiable renderer. Both
quantitative and qualitative evaluations demonstrate that our proposed texture
model produces more diverse and higher fidelity textures compared to existing
methods.
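To make the training scheme concrete, below is a minimal PyTorch sketch of the idea described in the abstract. It is not the authors' code: the generator and discriminator are toy stand-ins, and the differentiable renderer is reduced to a grid_sample lookup over per-pixel UV coordinates assumed to come from a fitted 3DMM. What it illustrates is that the discriminator only ever sees rendered and real face images, so no ground-truth UV maps are needed anywhere in the loss.

```python
# Toy sketch of GAN + differentiable renderer training (not the
# authors' implementation; all module shapes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVGenerator(nn.Module):
    """Stand-in for a StyleGAN-like UV texture generator."""
    def __init__(self, z_dim=128, uv_size=256):
        super().__init__()
        self.uv_size = uv_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 3 * uv_size * uv_size), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, self.uv_size, self.uv_size)

class Discriminator(nn.Module):
    """Judges rendered faces against real in-the-wild photos."""
    def __init__(self, img_size=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * img_size * img_size, 1))

    def forward(self, img):
        return self.net(img)

def differentiable_render(uv_texture, uv_coords):
    # Reduced renderer: uv_coords holds, for each output pixel, the UV
    # coordinates obtained by rasterizing the fitted 3DMM mesh (a
    # stand-in for a full differentiable renderer). grid_sample keeps
    # gradients flowing from image pixels back into the UV map.
    return F.grid_sample(uv_texture, uv_coords, align_corners=True)

G, D = UVGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(100):                                # toy loop
    real = torch.rand(8, 3, 128, 128) * 2 - 1          # face crops in [-1, 1]
    uv_coords = torch.rand(8, 128, 128, 2) * 2 - 1     # from 3DMM fits
    fake = differentiable_render(G(torch.randn(8, 128)), uv_coords)

    # Discriminator update (non-saturating GAN loss).
    d_loss = F.softplus(D(fake.detach())).mean() + F.softplus(-D(real)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: gradients reach the UV map through the renderer.
    g_loss = F.softplus(-D(fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the texture lookup is differentiable, the adversarial gradient on each rendered pixel flows back into the UV map, which is what lets the UV generator learn from in-the-wild photos alone.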
Related papers
- UVMap-ID: A Controllable and Personalized UV Map Generative Model [67.71022515856653]
We introduce UVMap-ID, a controllable and personalized UV Map generative model.
Unlike traditional large-scale 2D training methods, we propose to fine-tune a pre-trained text-to-image diffusion model; a sketch of this style of fine-tuning follows this entry.
Both quantitative and qualitative analyses demonstrate the effectiveness of our method in controllable and personalized UV Map generation.
arXiv Detail & Related papers (2024-04-22T20:30:45Z)
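A hedged sketch of what this kind of fine-tuning could look like with the Hugging Face diffusers library, assuming a Stable Diffusion base model. The model id, the identity-tagged prompt, and the random stand-in UV batch are illustrative assumptions, not details taken from the paper.

```python
# Fine-tune only the denoising U-Net of a pretrained text-to-image
# model on UV-map images (sketch; data loading and identity
# conditioning details are placeholders).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"            # assumed base model
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

vae.requires_grad_(False)
text_encoder.requires_grad_(False)
opt = torch.optim.AdamW(unet.parameters(), lr=1e-5)    # U-Net only

uv_batch = torch.rand(2, 3, 512, 512) * 2 - 1          # placeholder UV maps
prompts = ["a facial texture UV map of person <id>"] * 2  # hypothetical prompt

tokens = tokenizer(prompts, padding="max_length", max_length=77,
                   truncation=True, return_tensors="pt")
text_emb = text_encoder(tokens.input_ids)[0]

# Standard epsilon-prediction objective on VAE latents.
latents = vae.encode(uv_batch).latent_dist.sample() * vae.config.scaling_factor
noise = torch.randn_like(latents)
t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
noisy = scheduler.add_noise(latents, noise, t)

pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
loss = F.mse_loss(pred, noise)
loss.backward(); opt.step(); opt.zero_grad()
```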
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed to operate on well-behaved meshes, rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on such unruly geometry.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models have attracted immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines that entangle diffusion models with neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools; a conceptual sketch of this alternation follows this entry.
arXiv Detail & Related papers (2023-09-15T16:34:51Z)
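A conceptual sketch of the alternation described above. Every helper passed in below (diffusion_repaint, fit_nerf, render_mesh, bake_texture) is a hypothetical stand-in for the corresponding off-the-shelf tool; the point is only how the 2D and 3D stages hand results back and forth.

```python
# Alternate between 2D generative refinement and 3D consistency
# enforcement, keeping the two tools fully independent (sketch only).
def repaint_asset(mesh, prompt, views, diffusion_repaint, fit_nerf,
                  render_mesh, bake_texture, rounds=3):
    """Texture a legacy mesh; all callables are hypothetical stand-ins
    for an image-to-image diffusion call, radiance-field fitting, a
    rasterizer, and texture baking, respectively."""
    images = [render_mesh(mesh, v) for v in views]     # initial renders
    field = None
    for _ in range(rounds):
        # 2D step: repaint each view independently with the generator.
        images = [diffusion_repaint(img, prompt) for img in images]
        # 3D step: fuse the possibly inconsistent repaints into one field.
        field = fit_nerf(images, views)
        # Re-render a view-consistent appearance for the next round.
        images = [field.render(v) for v in views]
    return bake_texture(field, mesh)                   # export UV texture
```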
- Texture Generation on 3D Meshes with Point-UV Diffusion [86.69672057856243]
We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diverse, geometry-compatible, and high-fidelity textures; a rough outline of the coarse-to-fine split follows this entry.
arXiv Detail & Related papers (2023-08-21T06:20:54Z)
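A rough, hypothetical outline of such a coarse-to-fine split; every helper name below is an assumption rather than the paper's API. The intent is only to show the division of labor: a 3D stage colorizes surface points, the result is rasterized into UV space, and a 2D stage refines the image there.

```python
# Coarse-to-fine texture generation in the spirit of Point-UV
# diffusion (sketch; all helpers are hypothetical stand-ins).
def point_uv_texture(mesh, sample_points, point_diffusion,
                     project_to_uv, uv_diffusion):
    points = sample_points(mesh)                   # surface point cloud
    coarse_colors = point_diffusion(points, mesh)  # 3D diffusion stage
    coarse_uv = project_to_uv(points, coarse_colors, mesh)  # rasterize
    return uv_diffusion(coarse_uv, mesh)           # 2D refinement stage
```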
- FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction [46.3392612457273]
This dataset contains over 50,000 high-quality texture UV-maps with even illumination, neutral expressions, and cleaned facial regions.
Our pipeline leverages recent advances in StyleGAN-based facial image editing.
Experiments show that our method improves the reconstruction accuracy over state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-25T03:21:05Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net, which learns to embed 3D surfaces into an aligned 2D UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications, including texture transfer, texture synthesis, and textured single-view 3D reconstruction; a minimal sketch of the alignment idea follows this entry.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
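A minimal sketch of the alignment idea under assumed shapes; the mapper network, per-object codes, and reconstruction loss are illustrative stand-ins rather than the AUV-Net architecture. The reconstruction objective is what pushes corresponding parts of different objects toward shared UV regions.

```python
# Embed 3D surface points into a shared 2D UV square so that
# per-object textures become mutually aligned images (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVMapper(nn.Module):
    """Maps 3D surface points (plus a per-object code) to UV coords."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, 256), nn.ReLU(),
            nn.Linear(256, 2), nn.Tanh())           # UV in [-1, 1]

    def forward(self, pts, code):
        code = code.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.net(torch.cat([pts, code], dim=-1))

mapper = UVMapper()
texture = torch.rand(4, 3, 128, 128, requires_grad=True)  # per-object images
pts = torch.rand(4, 1024, 3)                    # sampled surface points
colors = torch.rand(4, 1024, 3)                 # their observed colors
code = torch.randn(4, 64)                       # per-object latent codes

# Colors looked up at the predicted UVs must match the surface colors;
# this reconstruction loss drives the alignment of the UV embedding.
uv = mapper(pts, code)                          # (4, 1024, 2)
pred = F.grid_sample(texture, uv.unsqueeze(2),  # grid: (4, 1024, 1, 2)
                     align_corners=True).squeeze(-1).transpose(1, 2)
loss = F.mse_loss(pred, colors)
loss.backward()
```

Once every object's texture lives in this shared layout, an ordinary 2D image GAN over the texture images suffices for synthesis, and swapping texture images between objects implements transfer.

- Weakly-Supervised Photo-realistic Texture Generation for 3D Face Reconstruction [48.952656891182826]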
High-fidelity 3D face texture generation has received comparatively little study.
The model consists of a UV sampler and a UV generator.
Training is supervised by a pseudo ground truth blended from the 3DMM texture and the input face texture; a sketch of this blending follows this entry.
arXiv Detail & Related papers (2021-06-14T12:34:35Z)
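A small sketch of that blending step; the visibility-mask source and the hard composition are assumptions for illustration, not the paper's exact recipe.

```python
# Pseudo ground truth: visible regions come from the unwrapped input
# photo, occluded regions from the fitted 3DMM texture (sketch).
import torch

def blend_pseudo_gt(input_uv, tdmm_uv, visibility_mask):
    """All tensors are (B, C, H, W); the mask is 1 where the input face
    texture is visible after unwrapping and 0 where it is occluded."""
    return visibility_mask * input_uv + (1.0 - visibility_mask) * tdmm_uv

pseudo_gt = blend_pseudo_gt(torch.rand(1, 3, 256, 256),
                            torch.rand(1, 3, 256, 256),
                            (torch.rand(1, 1, 256, 256) > 0.5).float())
```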