FreeUV: Ground-Truth-Free Realistic Facial UV Texture Recovery via Cross-Assembly Inference Strategy
- URL: http://arxiv.org/abs/2503.17197v1
- Date: Fri, 21 Mar 2025 14:44:22 GMT
- Title: FreeUV: Ground-Truth-Free Realistic Facial UV Texture Recovery via Cross-Assembly Inference Strategy
- Authors: Xingchao Yang, Takafumi Taketomi, Yuki Endo, Yoshihiro Kanamori
- Abstract summary: FreeUV is a ground-truth-free UV texture recovery framework that eliminates the need for annotated or synthetic UV data. Our approach captures intricate facial features and demonstrates robust performance across diverse poses. FreeUV offers a scalable solution for generating high-fidelity 3D facial textures suitable for real-world scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering high-quality 3D facial textures from single-view 2D images is a challenging task, especially under constraints of limited data and complex facial details such as makeup, wrinkles, and occlusions. In this paper, we introduce FreeUV, a novel ground-truth-free UV texture recovery framework that eliminates the need for annotated or synthetic UV data. FreeUV leverages a pre-trained Stable Diffusion model alongside a Cross-Assembly inference strategy to fulfill this objective. In FreeUV, separate networks are trained independently to focus on realistic appearance and structural consistency, and these networks are combined during inference to generate coherent textures. Our approach accurately captures intricate facial features and demonstrates robust performance across diverse poses and occlusions. Extensive experiments validate FreeUV's effectiveness, with results surpassing state-of-the-art methods in both quantitative and qualitative evaluations. Additionally, FreeUV enables new applications, including local editing, facial feature interpolation, and multi-view texture recovery. By reducing data requirements, FreeUV offers a scalable solution for generating high-fidelity 3D facial textures suitable for real-world scenarios.
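The Cross-Assembly strategy is only described at a high level here, so the following is a minimal sketch of the idea under stated assumptions: two small networks trained independently (one for realistic appearance, one for UV structural consistency, both hypothetical stand-ins) are combined only at inference, here with a naive blend rather than the paper's recombination of Stable Diffusion components.

```python
# Minimal sketch of Cross-Assembly-style inference (hypothetical modules).
# The paper recombines components around a pre-trained Stable Diffusion
# pipeline; here two independently trained nets are simply blended.
import torch
import torch.nn as nn

def small_cnn():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

appearance_net = small_cnn()  # trained alone for realistic appearance
structure_net = small_cnn()   # trained alone for UV structural consistency

@torch.no_grad()
def cross_assembly_infer(face_crop, uv_layout, alpha=0.5):
    """Combine the two independently trained networks only at inference."""
    app = appearance_net(face_crop)    # appearance cues from the input photo
    struct = structure_net(uv_layout)  # structure from the unwrapped UV layout
    return alpha * app + (1 - alpha) * struct  # naive blend, for illustration

uv_texture = cross_assembly_infer(torch.randn(1, 3, 64, 64),
                                  torch.randn(1, 3, 64, 64))
```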
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
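The interleaving of UV-space convolutions with point-space attention can be sketched roughly as below; the block structure, shapes, and precomputed texel indices are assumptions for illustration, not TEXGen's actual architecture.

```python
# Rough sketch of one interleaved block (hypothetical shapes/indices): a 2D
# convolution acts in UV space, then attention mixes features gathered at the
# texels that correspond to sampled surface points.
import torch
import torch.nn as nn

class UVPointBlock(nn.Module):
    def __init__(self, ch=32, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)

    def forward(self, uv_feat, texel_idx):
        # uv_feat: (B, C, H, W); texel_idx: (N, 2) long (row, col) per point
        x = self.conv(uv_feat)
        tokens = x[:, :, texel_idx[:, 0], texel_idx[:, 1]]   # (B, C, N)
        tokens = self.norm(tokens.permute(0, 2, 1))          # (B, N, C)
        mixed, _ = self.attn(tokens, tokens, tokens)         # point attention
        x = x.clone()                                        # write back to UV
        x[:, :, texel_idx[:, 0], texel_idx[:, 1]] += mixed.permute(0, 2, 1)
        return x

block = UVPointBlock()
out = block(torch.randn(1, 32, 64, 64), torch.randint(0, 64, (500, 2)))
```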
arXiv Detail & Related papers (2024-11-22T05:22:11Z)
- UV-free Texture Generation with Denoising and Geodesic Heat Diffusions [50.55154348768031]
Seams, wasted UV space, and varying resolution over the surface are the most prominent issues of the standard UV-based processing mechanism of meshes.
We propose to represent textures as coloured point clouds generated by a denoising diffusion model constrained to operate on the surface of 3D meshes.
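As a loose illustration of diffusing colour information along a surface, the sketch below runs graph-Laplacian heat steps over a mesh's edge list; the paper's geodesic heat diffusion operates on the true surface metric, so this uniform-weight version is only a stand-in.

```python
# Loose stand-in for geodesic heat diffusion: smooth per-vertex colours with
# a combinatorial graph Laplacian built from the mesh edge list (assumption:
# uniform edge weights; the paper's operator lives on the true surface).
import numpy as np
import scipy.sparse as sp

def heat_diffuse_colours(colours, edges, n_verts, t=0.1, steps=20):
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    adj = sp.coo_matrix((np.ones(rows.size), (rows, cols)),
                        shape=(n_verts, n_verts)).tocsr()
    deg = np.maximum(np.asarray(adj.sum(axis=1)).ravel(), 1.0)
    lap = sp.diags(deg) - adj                  # combinatorial Laplacian
    x = colours.astype(np.float64).copy()
    for _ in range(steps):                     # explicit Euler heat step
        x -= t * (lap @ x) / deg[:, None]
    return x
```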
arXiv Detail & Related papers (2024-08-29T17:57:05Z)
- SemUV: Deep Learning based semantic manipulation over UV texture map of virtual human heads [2.3523009382090323]
We introduce SemUV: a simple and effective approach using the FFHQ-UV dataset for semantic manipulation directly within the UV texture space.
We demonstrate its superior ability to preserve identity while effectively modifying semantic features such as age, gender, and facial hair.
arXiv Detail & Related papers (2024-06-28T20:58:59Z)
- Texture Generation on 3D Meshes with Point-UV Diffusion [86.69672057856243]
We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
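A toy rendition of the coarse-to-fine idea, with all shapes and the simple refiner being assumptions: coarse per-point colours are splatted into UV space, and a 2D network (standing in for the fine UV-space diffusion stage) refines the baked map.

```python
# Toy coarse-to-fine sketch (all names hypothetical): splat coarse per-point
# colours into a UV map, then refine with a small 2D net standing in for the
# fine UV-space diffusion stage.
import torch
import torch.nn as nn

def bake_points_to_uv(point_colours, uv_coords, res=128):
    # point_colours: (N, 3) in [0, 1]; uv_coords: (N, 2) in [0, 1]
    px = (uv_coords * (res - 1)).long()
    flat = px[:, 1] * res + px[:, 0]                  # texel index per point
    tex = torch.zeros(3, res * res).index_add_(1, flat, point_colours.t())
    cnt = torch.zeros(res * res).index_add_(0, flat, torch.ones(flat.numel()))
    return (tex / cnt.clamp(min=1)).view(3, res, res)  # average duplicates

refiner = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
coarse = bake_points_to_uv(torch.rand(1000, 3), torch.rand(1000, 2))
fine = refiner(coarse.unsqueeze(0))                   # (1, 3, 128, 128)
```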
arXiv Detail & Related papers (2023-08-21T06:20:54Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, thus resulting in a more faithful and consistent estimation.
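Directly acquiring the observed texture suggests an inpainting-style sampler: texels visible in the input stay pinned to their (noised) observed values while the diffusion prior completes the rest. The DDIM-style loop below is a generic sketch with a hypothetical eps-predictor, not the paper's exact sampler.

```python
# Generic inpainting-style DDIM sketch (hypothetical eps-predictor `model`):
# observed UV texels (mask == 1) are pinned to their noised observation at
# every step; the diffusion prior fills in the occluded region.
import torch

@torch.no_grad()
def inpaint_texture(model, x_obs, mask, alphas_cumprod):
    x = torch.randn_like(x_obs)
    for t in reversed(range(len(alphas_cumprod))):
        a_t = alphas_cumprod[t]
        eps = model(x, torch.tensor([t]))
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # predicted clean map
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
        x_known = a_prev.sqrt() * x_obs + (1 - a_prev).sqrt() * torch.randn_like(x)
        x = mask * x_known + (1 - mask) * x                 # keep observed texels
    return x
```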
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction [46.3392612457273]
This dataset contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions.
Our pipeline utilizes recent advances in StyleGAN-based facial image editing approaches.
Experiments show that our method improves the reconstruction accuracy over state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-25T03:21:05Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single-view 3D reconstruction.
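A toy version of the aligned-UV idea, with hypothetical shapes throughout: an MLP maps 3D surface points to shared 2D UV coordinates, after which any texture image can be sampled at those coordinates, enabling transfer across objects.

```python
# Toy aligned-UV mapping (hypothetical shapes): an MLP embeds 3D surface
# points into a shared 2D UV square, and textures are then sampled there.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVMapper(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2), nn.Tanh())  # uv in [-1, 1]

    def forward(self, pts):                  # pts: (B, N, 3) surface samples
        return self.mlp(pts)                 # (B, N, 2) aligned UV coordinates

def sample_texture(texture, uv):             # texture: (B, C, H, W)
    grid = uv.unsqueeze(2)                   # (B, N, 1, 2) for grid_sample
    return F.grid_sample(texture, grid, align_corners=True).squeeze(-1)

mapper = UVMapper()
uv = mapper(torch.randn(2, 1000, 3))
colours = sample_texture(torch.rand(2, 3, 256, 256), uv)  # (2, 3, 1000)
```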
arXiv Detail & Related papers (2022-04-06T21:39:24Z)