Texture Generation on 3D Meshes with Point-UV Diffusion
- URL: http://arxiv.org/abs/2308.10490v1
- Date: Mon, 21 Aug 2023 06:20:54 GMT
- Title: Texture Generation on 3D Meshes with Point-UV Diffusion
- Authors: Xin Yu, Peng Dai, Wenbo Li, Lan Ma, Zhengzhe Liu, Xiaojuan Qi
- Abstract summary: We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
- Score: 86.69672057856243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we focus on synthesizing high-quality textures on 3D meshes. We
present Point-UV diffusion, a coarse-to-fine pipeline that marries the
denoising diffusion model with UV mapping to generate 3D consistent and
high-quality texture images in UV space. We begin by introducing a point
diffusion model to synthesize low-frequency texture components with our
tailored style guidance to tackle the biased color distribution. The derived
coarse texture offers global consistency and serves as a condition for the
subsequent UV diffusion stage, aiding in regularizing the model to generate a
3D consistent UV texture image. Then, a UV diffusion model with hybrid
conditions is developed to enhance the texture fidelity in the 2D UV space. Our
method can process meshes of any genus, generating diversified,
geometry-compatible, and high-fidelity textures. Code is available at
https://cvmi-lab.github.io/Point-UV-Diffusion
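As a rough illustration of the coarse-to-fine idea, the sketch below runs a generic DDPM ancestral-sampling loop twice: first over per-point colors, then over a UV texture image conditioned on the coarse result. The stub denoisers, shapes, noise schedule, and the rasterization stand-in are assumptions for illustration only, not the paper's released code.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def point_denoiser(x, t, cond=None):
    # Stub standing in for the paper's point diffusion network.
    return torch.zeros_like(x)

def uv_denoiser(x, t, cond=None):
    # Stub standing in for the paper's UV diffusion network.
    return torch.zeros_like(x)

@torch.no_grad()
def ddpm_sample(denoise_fn, shape, cond=None):
    """Standard DDPM ancestral sampling loop."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps = denoise_fn(x, t, cond)  # predicted noise
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x

# Stage 1: point diffusion produces coarse, low-frequency per-point colors.
coarse_colors = ddpm_sample(point_denoiser, shape=(4096, 3))

# Stand-in for rasterizing coarse point colors into UV space (hypothetical).
coarse_uv = coarse_colors.mean(dim=0).view(3, 1, 1).expand(3, 512, 512)

# Stage 2: UV diffusion refines a texture image conditioned on the coarse map.
texture = ddpm_sample(uv_denoiser, shape=(3, 512, 512), cond=coarse_uv)
```

In the actual pipeline the first-stage output would be rasterized onto the mesh's UV chart and fed to the second stage as a hybrid condition; the constant-color stand-in above only preserves the data flow.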
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds; a toy version of one such block is sketched below.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
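A minimal, hedged sketch of what one interleaved UV-convolution/point-attention block could look like (the names, shapes, and the crude nearest-pixel scatter-back are assumptions, not TEXGen's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVPointBlock(nn.Module):
    """Toy hybrid block: a UV-space convolution, then attention over features
    sampled at surface points, scattered back into the UV map."""
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, uv_feat, uv_coords):
        # uv_feat: (B, C, H, W) texture features; uv_coords: (B, N, 2) in [-1, 1]
        uv_feat = self.conv(uv_feat)
        # Sample per-point features at each point's UV location.
        pts = F.grid_sample(uv_feat, uv_coords.unsqueeze(2), align_corners=True)
        pts = pts.squeeze(-1).transpose(1, 2)              # (B, N, C)
        pts, _ = self.attn(pts, pts, pts)                  # global point attention
        # Crude scatter-back: write attended features to their nearest pixels.
        h, w = uv_feat.shape[-2:]
        ix = ((uv_coords[..., 0] + 1) / 2 * (w - 1)).round().long().clamp(0, w - 1)
        iy = ((uv_coords[..., 1] + 1) / 2 * (h - 1)).round().long().clamp(0, h - 1)
        out = uv_feat.clone()
        for b in range(uv_feat.shape[0]):                  # clarity over speed
            out[b, :, iy[b], ix[b]] = out[b, :, iy[b], ix[b]] + pts[b].T
        return out

block = UVPointBlock()
y = block(torch.randn(1, 64, 128, 128), torch.rand(1, 256, 2) * 2 - 1)
```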
arXiv Detail & Related papers (2024-11-22T05:22:11Z)
- UV-free Texture Generation with Denoising and Geodesic Heat Diffusions [50.55154348768031]
Seams, wasted UV space, and varying resolution over the surface are the most prominent issues of standard UV-based texture processing for meshes.
We propose to represent textures as coloured point clouds generated by a denoising diffusion model constrained to operate on the surface of 3D meshes.
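For intuition only: diffusing information along the surface rather than in UV space can be approximated by a heat step on the mesh Laplacian. The sketch below uses a uniform graph Laplacian and a single implicit Euler step; the paper's geodesic heat operator is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def heat_diffuse(vert_feat, faces, t=1e-2):
    """Smooth per-vertex features along the surface: solve (I + t L) u_t = u_0."""
    n = vert_feat.shape[0]
    # Uniform graph Laplacian built from triangle edges.
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    adj = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n)).tocsr()
    adj = ((adj + adj.T) > 0).astype(float)
    lap = sp.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj
    # One implicit Euler step of the heat equation.
    return spsolve((sp.identity(n) + t * lap).tocsc(), vert_feat)

# Toy usage: two triangles sharing an edge, diffusing RGB vertex colors.
faces = np.array([[0, 1, 2], [1, 3, 2]])
colors = np.random.rand(4, 3)
smoothed = heat_diffuse(colors, faces)
```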
arXiv Detail & Related papers (2024-08-29T17:57:05Z)
- UVMap-ID: A Controllable and Personalized UV Map Generative Model [67.71022515856653]
We introduce UVMap-ID, a controllable and personalized UV Map generative model.
Unlike traditional large-scale training methods in 2D, we propose to fine-tune a pre-trained text-to-image diffusion model; a single fine-tuning step is sketched below.
Both quantitative and qualitative analyses demonstrate the effectiveness of our method in controllable and personalized UV Map generation.
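A compact sketch of a single epsilon-prediction fine-tuning step on UV-map data (the stub network, noise schedule, and embedding shapes are placeholders, not UVMap-ID's actual model):

```python
import torch
import torch.nn as nn

T = 1000
alpha_bars = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)

class StubUNet(nn.Module):
    """Stand-in for a pre-trained text-conditioned UNet (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x, t, text_emb):
        return self.net(x)

unet = StubUNet()
opt = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # small LR for fine-tuning

uv_batch = torch.rand(4, 3, 64, 64)    # a batch of UV-map training images
text_emb = torch.randn(4, 77, 768)     # placeholder text embeddings
t = torch.randint(0, T, (4,))
noise = torch.randn_like(uv_batch)
ab = alpha_bars[t].view(-1, 1, 1, 1)
noisy = ab.sqrt() * uv_batch + (1 - ab).sqrt() * noise  # forward diffusion
loss = nn.functional.mse_loss(unet(noisy, t, text_emb), noise)
opt.zero_grad(); loss.backward(); opt.step()
```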
arXiv Detail & Related papers (2024-04-22T20:30:45Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoising network to multiple rendered 2D views and aggregating the predictions on a shared latent texture map.
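A toy version of the view-aggregation idea (not TexFusion's actual sampler): denoise several rendered views and average the predictions back into a shared texture so the views stay consistent. The projector interface below is an assumption.

```python
import torch

def aggregate_views(texture, views, denoise_fn, t):
    """texture: (C, H, W); views: list of (project, unproject) callables."""
    acc = torch.zeros_like(texture)
    weight = torch.zeros_like(texture)
    for project, unproject in views:
        img = project(texture)       # render the current texture into a view
        img = denoise_fn(img, t)     # one image-space denoising step
        back, mask = unproject(img)  # splat the view back into texture space
        acc += back * mask
        weight += mask
    return torch.where(weight > 0, acc / weight.clamp(min=1e-8), texture)

# Toy usage with identity "cameras" and a no-op denoiser.
identity = (lambda tex: tex, lambda img: (img, torch.ones_like(img)))
tex = aggregate_views(torch.rand(3, 64, 64), [identity, identity],
                      lambda x, t: x, t=500)
```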
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- TUVF: Learning Generalizable Texture UV Radiance Fields [32.417062841312976]
We introduce Texture UV Radiance Fields (TUVF) that generate textures in a learnable UV sphere space rather than directly on the 3D shape.
TUVF allows the texture to be disentangled from the underlying shape and transferable to other shapes that share the same UV space.
We perform our experiments on synthetic and real-world object datasets.
arXiv Detail & Related papers (2023-05-04T17:58:05Z)
- FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction [46.3392612457273]
This dataset contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions.
Our pipeline utilizes the recent advances in StyleGAN-based facial image editing approaches.
Experiments show that our method improves the reconstruction accuracy over state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-25T03:21:05Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single-view 3D reconstruction; a toy sketch of the aligned-UV idea follows the citation.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
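A toy sketch under loose assumptions, not AUV-Net's architecture: a network maps surface points to shared 2D UV coordinates while a decoder shared across shapes reconstructs colors, pressuring corresponding points on different shapes toward the same UV locations.

```python
import torch
import torch.nn as nn

class ToyAlignedUV(nn.Module):
    """Point -> (u, v) embedder plus a shared (u, v) -> RGB decoder."""
    def __init__(self, hidden=128):
        super().__init__()
        self.to_uv = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2))   # point -> (u, v)
        self.decode = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3))  # (u, v) -> RGB

    def forward(self, pts):
        uv = torch.tanh(self.to_uv(pts))    # keep UVs in [-1, 1]
        return uv, self.decode(uv)

model = ToyAlignedUV()
pts = torch.randn(1024, 3)          # surface samples from one shape
target_rgb = torch.rand(1024, 3)    # their observed colors
uv, pred = model(pts)
loss = nn.functional.mse_loss(pred, target_rgb)  # reconstruction objective
```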
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.