LaFiTe: A Generative Latent Field for 3D Native Texturing
- URL: http://arxiv.org/abs/2512.04786v1
- Date: Thu, 04 Dec 2025 13:33:49 GMT
- Title: LaFiTe: A Generative Latent Field for 3D Native Texturing
- Authors: Chia-Hao Chen, Zi-Xin Zou, Yan-Pei Cao, Ze Yuan, Guan Luo, Xiaojuan Qi, Ding Liang, Song-Hai Zhang, Yuan-Chen Guo
- Abstract summary: Existing native approaches are constrained by the absence of a powerful and versatile representation, which severely limits the fidelity and generality of their generated textures. We introduce LaFiTe, which generates textures as a sparse latent color field, disentangled from mesh topology and UV parameterization.
- Score: 72.05710323154288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating high-fidelity, seamless textures directly on 3D surfaces, what we term 3D-native texturing, remains a fundamental open challenge, with the potential to overcome long-standing limitations of UV-based and multi-view projection methods. However, existing native approaches are constrained by the absence of a powerful and versatile latent representation, which severely limits the fidelity and generality of their generated textures. We identify this representation gap as the principal barrier to further progress. We introduce LaFiTe, a framework that addresses this challenge by learning to generate textures as a 3D generative sparse latent color field. At its core, LaFiTe employs a variational autoencoder (VAE) to encode complex surface appearance into a sparse, structured latent space, which is subsequently decoded into a continuous color field. This representation achieves unprecedented fidelity, exceeding state-of-the-art methods by >10 dB PSNR in reconstruction, by effectively disentangling texture appearance from mesh topology and UV parameterization. Building upon this strong representation, a conditional rectified-flow model synthesizes high-quality, coherent textures across diverse styles and geometries. Extensive experiments demonstrate that LaFiTe not only sets a new benchmark for 3D-native texturing but also enables flexible downstream applications such as material synthesis and texture super-resolution, paving the way for the next generation of 3D content creation workflows.
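The abstract describes two core components: a conditional rectified-flow model that synthesizes latents, and a decoder that turns a sparse latent set into a continuous color field queryable at any surface point. The paper does not publish code here, so the following is only a minimal illustrative sketch of those two generic mechanisms (Euler integration of a straight-path flow, and RBF-weighted decoding of sparse latents); the function names, the RBF decoder, and all parameters are hypothetical stand-ins, not LaFiTe's actual architecture.

```python
import numpy as np

def euler_sample(velocity_fn, x0, n_steps=32):
    """Integrate dx/dt = v(x, t) from t=0 to t=1, the standard sampling
    loop for a rectified-flow model (velocity_fn would be a trained net)."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

def query_color_field(points, latent_xyz, latent_feat, bandwidth=0.1):
    """Decode a continuous field from sparse latents: each query point
    blends latent features with Gaussian (RBF) weights by distance.
    A learned decoder would replace this fixed interpolation."""
    d2 = ((points[:, None, :] - latent_xyz[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)
    return w @ latent_feat

# Toy usage: a constant velocity transports noise exactly to a target,
# because rectified flow assumes straight paths x_t = (1-t)*x0 + t*x1.
start, target = np.zeros(3), np.ones(3)
sample = euler_sample(lambda x, t: target - start, start)

# Querying the field exactly at a latent's location returns (approximately)
# that latent's feature, since its RBF weight dominates.
latent_xyz = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
latent_feat = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
colors = query_color_field(np.array([[0.0, 0.0, 0.0]]), latent_xyz, latent_feat)
```

Note the design point the abstract emphasizes: because the field is queried by 3D position alone, the decoded texture is independent of mesh topology and UV parameterization.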
Related papers
- TexSpot: 3D Texture Enhancement with Spatially-uniform Point Latent Representation [47.87566902467006]
We introduce TexSpot, a diffusion-based texture enhancement framework. At its core is Texlet, a novel 3D texture representation. A cascaded 3D-to-2D decoder reconstructs high-quality texture patches.
arXiv Detail & Related papers (2026-02-12T16:37:31Z) - TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond [42.93031959503468]
TEXTRIX is a native 3D attribute generation framework for high-fidelity texture synthesis and downstream applications. Our approach constructs a latent 3D attribute grid and leverages a Diffusion Transformer equipped with sparse attention. Built upon this native representation, the framework naturally extends to high-precision 3D segmentation by training the same architecture to predict semantic attributes on the grid.
arXiv Detail & Related papers (2025-12-02T18:18:20Z) - A Scalable Attention-Based Approach for Image-to-3D Texture Mapping [3.8476192001237597]
High-quality textures are critical for realistic 3D content creation. Existing generative methods are slow, rely on UV maps, and often fail to remain faithful to a reference image. We propose a transformer-based framework that predicts a 3D texture field directly from a single image and a mesh.
arXiv Detail & Related papers (2025-09-05T14:18:52Z) - SeqTex: Generate Mesh Textures in Video Sequence [62.766839821764144]
We introduce SeqTex, a novel end-to-end framework for training 3D texture generative models. We show that SeqTex achieves state-of-the-art performance on both image-conditioned and text-conditioned 3D texture generation tasks.
arXiv Detail & Related papers (2025-07-06T07:58:36Z) - UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes [35.667175445637604]
We present UniTEX, a novel two-stage 3D texture generation framework. UniTEX achieves superior visual quality and texture integrity compared to existing approaches.
arXiv Detail & Related papers (2025-05-29T08:58:41Z) - RomanTex: Decoupling 3D-aware Rotary Positional Embedded Multi-Attention Network for Texture Synthesis [10.350576861948952]
RomanTex is a multiview-based texture generation framework that integrates a multi-attention network with an underlying 3D representation. Our method achieves state-of-the-art results in texture quality and consistency.
arXiv Detail & Related papers (2025-03-24T17:56:11Z) - Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation [56.862552362223425]
This report presents a comprehensive framework for generating high-quality 3D shapes and textures from diverse input prompts. The framework consists of 3D shape generation and texture generation. This report details the system architecture, experimental results, and potential future directions to improve and expand the framework.
arXiv Detail & Related papers (2025-02-20T04:22:30Z) - GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation [75.39457097832113]
This paper introduces a novel 3D generation framework, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. Our framework employs a Variational Autoencoder with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information. The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs.
arXiv Detail & Related papers (2024-11-12T18:59:32Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.