A Scalable Attention-Based Approach for Image-to-3D Texture Mapping
- URL: http://arxiv.org/abs/2509.05131v1
- Date: Fri, 05 Sep 2025 14:18:52 GMT
- Title: A Scalable Attention-Based Approach for Image-to-3D Texture Mapping
- Authors: Arianna Rampini, Kanika Madan, Bruno Roy, AmirHossein Zamani, Derek Cheung,
- Abstract summary: High-quality textures are critical for realistic 3D content creation. Existing generative methods are slow, rely on UV maps, and often fail to remain faithful to a reference image. We propose a transformer-based framework that predicts a 3D texture field directly from a single image and a mesh.
- Score: 3.8476192001237597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-quality textures are critical for realistic 3D content creation, yet existing generative methods are slow, rely on UV maps, and often fail to remain faithful to a reference image. To address these challenges, we propose a transformer-based framework that predicts a 3D texture field directly from a single image and a mesh, eliminating the need for UV mapping and differentiable rendering, and enabling faster texture generation. Our method integrates a triplane representation with depth-based backprojection losses, enabling efficient training and faster inference. Once trained, it generates high-fidelity textures in a single forward pass, requiring only 0.2s per shape. Extensive qualitative, quantitative, and user preference evaluations demonstrate that our method outperforms state-of-the-art baselines on single-image texture reconstruction in terms of both fidelity to the input image and perceptual quality, highlighting its practicality for scalable, high-quality, and controllable 3D content creation.
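The abstract ships no code, but its two main ingredients, a triplane texture field and a depth-based backprojection loss, are standard enough to sketch. Below is a minimal, hypothetical PyTorch sketch of querying a triplane field at mesh surface points: features are bilinearly sampled from three axis-aligned planes, summed, and decoded to RGB by a small MLP. All names, shapes, and the decoder architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class TriplaneTextureField(torch.nn.Module):
    """Toy triplane texture field: three feature planes plus a small MLP."""

    def __init__(self, feat_dim: int = 32, res: int = 128):
        super().__init__()
        # Learned axis-aligned feature planes for the xy, xz, and yz planes.
        self.planes = torch.nn.Parameter(0.01 * torch.randn(3, feat_dim, res, res))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3), torch.nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) surface points normalized to [-1, 1]^3.
        feats = 0.0
        for plane, axes in zip(self.planes, ([0, 1], [0, 2], [1, 2])):
            uv = pts[:, axes].view(1, -1, 1, 2)           # (1, N, 1, 2) grid
            sampled = F.grid_sample(plane[None], uv,      # -> (1, C, N, 1)
                                    mode="bilinear", align_corners=True)
            feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, C)
        return self.decoder(feats)

field = TriplaneTextureField()
colors = field(2 * torch.rand(1024, 3) - 1)  # one forward pass -> (1024, 3)
```

Under the same caveats, a depth-based backprojection loss could compare the field's predictions against reference-image pixels at the projected locations of visible surface points. Continuing from the sketch above, the pinhole camera model, visibility test, and all names here are assumptions for illustration:

```python
def backprojection_loss(field, pts_world, K, R, t, image, depth, tol=1e-2):
    # pts_world: (N, 3) mesh surface samples; image: (3, H, W) reference;
    # depth: (1, H, W) depth of the mesh rendered from the same camera.
    pts_cam = pts_world @ R.t() + t               # world -> camera frame
    z = pts_cam[:, 2:].clamp(min=1e-6)
    uv = (pts_cam @ K.t())[:, :2] / z             # pinhole projection, pixels
    H, W = image.shape[-2:]
    grid = torch.stack(
        [uv[:, 0] / (W - 1) * 2 - 1, uv[:, 1] / (H - 1) * 2 - 1], dim=-1
    ).view(1, -1, 1, 2)                           # normalized for grid_sample
    ref = F.grid_sample(image[None], grid, align_corners=True).view(3, -1).t()
    d = F.grid_sample(depth[None], grid, align_corners=True).view(-1)
    visible = (z.view(-1) - d).abs() < tol        # discard occluded points
    pred = field(pts_world)                       # assumes pts in [-1, 1]^3
    return F.mse_loss(pred[visible], ref[visible])
```

In this reading, inference reduces to one field query per surface point, so a single forward pass suffices, which matches the 0.2s-per-shape figure reported above.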
Related papers
- LaFiTe: A Generative Latent Field for 3D Native Texturing [72.05710323154288]
Existing native approaches are hampered by the absence of a powerful and versatile representation, which severely limits the fidelity and generality of their generated textures. We introduce LaFiTe, which generates high-quality textures unconstrained by sparse color representations or UV parameterization.
arXiv Detail & Related papers (2025-12-04T13:33:49Z)
- UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes [35.667175445637604]
We present UniTEX, a novel two-stage 3D texture generation framework. UniTEX achieves superior visual quality and texture integrity compared to existing approaches.
arXiv Detail & Related papers (2025-05-29T08:58:41Z)
- TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features [78.13246375582906]
We present a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface target colors. Our approach achieves superior texture quality across 3D models in applications like game development.
arXiv Detail & Related papers (2025-03-20T18:35:03Z)
- Make-A-Texture: Fast Shape-Aware Texture Generation in 3 Seconds [11.238020531599405]
We present Make-A-Texture, a new framework that efficiently synthesizes high-resolution texture maps from textual prompts for given 3D geometries. A significant feature of our method is its remarkable efficiency: it generates a full texture within an end-to-end runtime of just 3.07 seconds on a single NVIDIA H100 GPU. Our work significantly improves the applicability and practicality of texture generation models for real-world 3D content creation, including interactive creation and text-guided texture editing.
arXiv Detail & Related papers (2024-12-10T18:58:29Z)
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprising two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network capable of upscaling any texture by an arbitrary ratio, producing 4K-resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- HiFiHR: Enhancing 3D Hand Reconstruction from a Single Image via High-Fidelity Texture [40.012406098563204]
We present HiFiHR, a high-fidelity hand reconstruction approach that applies a render-and-compare strategy within a learning-based framework to reconstruct hands from a single image.
Experimental results on public benchmarks including FreiHAND and HO-3D demonstrate that our method outperforms the state-of-the-art hand reconstruction methods in texture reconstruction quality.
arXiv Detail & Related papers (2023-08-25T18:48:40Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We then jointly refine the appearance and geometry, and bake the appearance into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view in-the-wild images.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.