AvatarTex: High-Fidelity Facial Texture Reconstruction from Single-Image Stylized Avatars
- URL: http://arxiv.org/abs/2511.06721v1
- Date: Mon, 10 Nov 2025 05:31:15 GMT
- Title: AvatarTex: High-Fidelity Facial Texture Reconstruction from Single-Image Stylized Avatars
- Authors: Yuda Qiu, Zitong Xiao, Yiwei Zuo, Zisheng Ye, Weikai Chen, Xiaoguang Han
- Abstract summary: AvatarTex is a facial texture reconstruction framework capable of generating both stylized and photorealistic textures from a single image. Our three-stage pipeline first completes missing texture regions via diffusion-based inpainting, refines style and structure consistency using GAN-based latent optimization, and enhances fine details through diffusion-based repainting. To address the need for a stylized texture dataset, we introduce TexHub, a high-resolution collection of 20,000 multi-style UV textures with precise UV-aligned layouts.
- Score: 16.44346662761451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present AvatarTex, a high-fidelity facial texture reconstruction framework capable of generating both stylized and photorealistic textures from a single image. Existing methods struggle with stylized avatars due to the lack of diverse multi-style datasets and challenges in maintaining geometric consistency in non-standard textures. To address these limitations, AvatarTex introduces a novel three-stage diffusion-to-GAN pipeline. Our key insight is that while diffusion models excel at generating diversified textures, they lack explicit UV constraints, whereas GANs provide a well-structured latent space that ensures style and topology consistency. By integrating these strengths, AvatarTex achieves high-quality topology-aligned texture synthesis with both artistic and geometric coherence. Specifically, our three-stage pipeline first completes missing texture regions via diffusion-based inpainting, refines style and structure consistency using GAN-based latent optimization, and enhances fine details through diffusion-based repainting. To address the need for a stylized texture dataset, we introduce TexHub, a high-resolution collection of 20,000 multi-style UV textures with precise UV-aligned layouts. By leveraging TexHub and our structured diffusion-to-GAN pipeline, AvatarTex establishes a new state-of-the-art in multi-style facial texture reconstruction. TexHub will be released upon publication to facilitate future research in this field.
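The three-stage diffusion-to-GAN pipeline in the abstract can be sketched as follows. The AvatarTex models are not public, so every function here is a hypothetical stand-in: each stage is mocked as a simple operation on a UV texture array (H x W x 3, floats in [0, 1]) purely to illustrate the data flow of inpaint, then GAN-latent refine, then repaint.

```python
import numpy as np

def diffusion_inpaint(texture, mask):
    # Stage 1 (mocked): fill masked (missing) texels. A real diffusion
    # inpainter would hallucinate content; here we mean-fill as a placeholder.
    filled = texture.copy()
    filled[mask] = texture[~mask].mean(axis=0)
    return filled

def gan_latent_refine(texture):
    # Stage 2 (mocked): projecting onto a GAN latent space enforces style
    # and topology consistency; mocked as mild smoothing toward the mean.
    return 0.9 * texture + 0.1 * texture.mean(axis=(0, 1), keepdims=True)

def diffusion_repaint(texture, strength=0.05):
    # Stage 3 (mocked): re-noise and denoise to restore fine detail;
    # mocked as a small bounded perturbation followed by clipping.
    rng = np.random.default_rng(0)
    noisy = texture + strength * rng.standard_normal(texture.shape)
    return np.clip(noisy, 0.0, 1.0)

def avatartex_pipeline(partial_texture, missing_mask):
    """Data flow of the three-stage pipeline described in the abstract."""
    t = diffusion_inpaint(partial_texture, missing_mask)
    t = gan_latent_refine(t)
    return diffusion_repaint(t)

# Toy usage: a 64x64 UV texture with one unobserved corner region.
tex = np.random.default_rng(1).random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[:16, :16] = True
out = avatartex_pipeline(tex, mask)
print(out.shape)  # (64, 64, 3)
```

The ordering mirrors the paper's stated insight: the diffusion stages supply diversity at the start and fine detail at the end, while the GAN stage in the middle anchors the result to a well-structured, UV-topology-consistent latent space.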
Related papers
- TexSpot: 3D Texture Enhancement with Spatially-uniform Point Latent Representation [47.87566902467006]
We introduce TexSpot, a diffusion-based texture enhancement framework. At its core is Texlet, a novel 3D texture representation. A cascaded 3D-to-2D decoder reconstructs high-quality texture patches.
arXiv Detail & Related papers (2026-02-12T16:37:31Z) - LaFiTe: A Generative Latent Field for 3D Native Texturing [72.05710323154288]
Existing native approaches are hampered by the absence of a powerful and versatile representation, which severely limits the fidelity and generality of their generated textures. We introduce LaFiTe, which generates high-quality textures constrained by a sparse color representation and UV parameterization.
arXiv Detail & Related papers (2025-12-04T13:33:49Z) - NaTex: Seamless Texture Generation as Latent Color Diffusion [23.99275629136662]
We present NaTex, a native texture generation framework that predicts texture color directly in 3D space. NaTex avoids several inherent limitations of the MVD pipeline.
arXiv Detail & Related papers (2025-11-20T12:47:22Z) - DiffTex: Differentiable Texturing for Architectural Proxy Models [63.370581207280004]
We propose an automated method for generating realistic texture maps for architectural proxy models at the texel level from unordered photographs. Our approach establishes correspondences between texels on a UV map and pixels in the input images, with each texel's color computed as a weighted blend of associated pixel values.
arXiv Detail & Related papers (2025-09-27T14:39:53Z) - SeqTex: Generate Mesh Textures in Video Sequence [62.766839821764144]
We introduce SeqTex, a novel end-to-end framework for training 3D texture generative models. We show that SeqTex achieves state-of-the-art performance on both image-conditioned and text-conditioned 3D texture generation tasks.
arXiv Detail & Related papers (2025-07-06T07:58:36Z) - FlexPainter: Flexible and Multi-View Consistent Texture Generation [15.727635740684157]
FlexPainter is a novel texture generation pipeline that enables flexible multi-modal conditional guidance. Our framework significantly outperforms state-of-the-art methods in both flexibility and generation quality.
arXiv Detail & Related papers (2025-06-03T08:36:03Z) - UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes [35.667175445637604]
We present UniTEX, a novel two-stage 3D texture generation framework. UniTEX achieves superior visual quality and texture integrity compared to existing approaches.
arXiv Detail & Related papers (2025-05-29T08:58:41Z) - PacTure: Efficient PBR Texture Generation on Packed Views with Visual Autoregressive Models [73.4445896872942]
PacTure is a framework for generating physically-based rendering (PBR) material textures from an untextured 3D mesh. We introduce view packing, a novel technique that increases the effective resolution for each view.
arXiv Detail & Related papers (2025-05-28T14:23:30Z) - TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
arXiv Detail & Related papers (2024-11-22T05:22:11Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models. Specifically, we leverage latent diffusion models, apply the denoising sampler on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.