FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction
- URL: http://arxiv.org/abs/2211.13874v2
- Date: Fri, 24 Mar 2023 14:44:50 GMT
- Title: FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction
- Authors: Haoran Bai, Di Kang, Haoxian Zhang, Jinshan Pan, Linchao Bao
- Abstract summary: This dataset contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions.
Our pipeline utilizes the recent advances in StyleGAN-based facial image editing approaches.
Experiments show that our method improves the reconstruction accuracy over state-of-the-art approaches.
- Score: 46.3392612457273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a large-scale facial UV-texture dataset that contains over 50,000
high-quality texture UV-maps with even illuminations, neutral expressions, and
cleaned facial regions, which are desired characteristics for rendering
realistic 3D face models under different lighting conditions. The dataset is
derived from the large-scale face image dataset FFHQ using our fully
automatic and robust UV-texture production pipeline. Our pipeline
utilizes the recent advances in StyleGAN-based facial image editing approaches
to generate multi-view normalized face images from single-image inputs. An
elaborated UV-texture extraction, correction, and completion procedure is then
applied to produce high-quality UV-maps from the normalized face images.
Compared with existing UV-texture datasets, our dataset has more diverse and
higher-quality texture maps. We further train a GAN-based texture decoder as
the nonlinear texture basis for parametric fitting based 3D face
reconstruction. Experiments show that our method improves the reconstruction
accuracy over state-of-the-art approaches, and more importantly, produces
high-quality texture maps that are ready for realistic renderings. The dataset,
code, and pre-trained texture decoder are publicly available at
https://github.com/csbhr/FFHQ-UV.
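The abstract's fitting step can be illustrated with a minimal sketch: a latent code is optimized so that a fixed nonlinear decoder reproduces an observed texture, which is the role the paper's GAN-based texture decoder plays as a nonlinear texture basis. Everything below is an assumption for illustration only: a toy `tanh` decoder stands in for the real StyleGAN-style decoder, and the real pipeline additionally optimizes shape, pose, and lighting through a differentiable renderer.

```python
# Toy sketch of latent fitting against a nonlinear texture decoder.
# The decoder here is a hypothetical stand-in, NOT the FFHQ-UV decoder.
import numpy as np

rng = np.random.default_rng(0)
LATENT, PIXELS = 8, 64  # tiny sizes for illustration

# Fixed "decoder" weights playing the role of a pre-trained texture decoder.
W = rng.standard_normal((PIXELS, LATENT)) * 0.5

def decode(z):
    """Nonlinear texture basis: latent code -> flattened UV texture."""
    return np.tanh(W @ z)

# Synthesize an "observed" texture from a hidden ground-truth latent.
z_true = rng.standard_normal(LATENT)
target = decode(z_true)

# Fit the latent by gradient descent on the photometric L2 loss.
z = np.zeros(LATENT)
for _ in range(2000):
    pred = decode(z)
    residual = pred - target                    # dL/dpred (up to a factor of 2)
    grad = W.T @ (residual * (1.0 - pred**2))   # chain rule through tanh
    z -= 0.02 * grad

loss = float(np.mean((decode(z) - target) ** 2))
print(f"final fitting loss: {loss:.6f}")
```

Because the decoder is nonlinear, the recovered texture is not restricted to a linear span of basis textures, which is the advantage the paper claims over linear 3DMM texture bases.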
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
arXiv Detail & Related papers (2024-11-22T05:22:11Z) - Texture Generation on 3D Meshes with Point-UV Diffusion [86.69672057856243]
We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
arXiv Detail & Related papers (2023-08-21T06:20:54Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Weakly-Supervised Photo-realistic Texture Generation for 3D Face Reconstruction [48.952656891182826]
High-fidelity 3D face texture generation remains largely unexplored.
The model consists of a UV sampler and a UV generator.
Training is based on pseudo ground truth blended from the 3DMM texture and the input face texture.
arXiv Detail & Related papers (2021-06-14T12:34:35Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - StyleUV: Diverse and High-fidelity UV Map Generative Model [24.982824840625216]
We present a novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training.
Both quantitative and qualitative evaluations demonstrate that our proposed texture model produces more diverse and higher fidelity textures compared to existing methods.
arXiv Detail & Related papers (2020-11-25T17:19:44Z) - Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.