Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
- URL: http://arxiv.org/abs/2312.13913v2
- Date: Fri, 22 Dec 2023 06:27:43 GMT
- Title: Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
- Authors: Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang,
Bin Fu, Yong Liu, Gang Yu
- Abstract summary: Paint3D produces high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes, conditioned on text or image inputs.
- Score: 34.715076045118444
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents Paint3D, a novel coarse-to-fine generative framework that
is capable of producing high-resolution, lighting-less, and diverse 2K UV
texture maps for untextured 3D meshes conditioned on text or image inputs. The
key challenge addressed is generating high-quality textures without embedded
illumination information, which allows the textures to be re-lighted or
re-edited within modern graphics pipelines. To achieve this, our method first
leverages a pre-trained depth-aware 2D diffusion model to generate
view-conditional images and perform multi-view texture fusion, producing an
initial coarse texture map. However, because 2D models cannot fully represent 3D
shapes or suppress lighting effects, the coarse texture map exhibits incomplete
areas and illumination artifacts. To resolve this, we train separate UV
Inpainting and UVHD diffusion models specialized for the shape-aware refinement
of incomplete areas and the removal of illumination artifacts. Through this
coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that
maintain semantic consistency while being lighting-less, significantly
advancing the state-of-the-art in texturing 3D objects.
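To make the coarse stage concrete, below is a minimal numpy sketch of the multi-view texture fusion step, assuming the per-view images come from a depth-aware 2D diffusion model (that upstream call is not shown) and that per-pixel UV coordinates and blend weights are available from rasterization; all names and the weighting scheme are illustrative, not the paper's implementation.

```python
import numpy as np

def fuse_views_to_uv(view_images, view_uvs, view_weights, uv_res=2048):
    """Back-project per-view colors into one shared UV texture map.

    view_images:  list of (H, W, 3) images from a depth-aware
                  2D diffusion model (upstream call not shown).
    view_uvs:     list of (H, W, 2) per-pixel UV coords in [0, 1],
                  obtained by rasterizing the mesh from each view.
    view_weights: list of (H, W) blend weights, e.g. the cosine of
                  the angle between the view ray and surface normal.
    """
    tex = np.zeros((uv_res, uv_res, 3))
    acc = np.zeros((uv_res, uv_res, 1))
    for img, uv, w in zip(view_images, view_uvs, view_weights):
        u = np.clip((uv[..., 0] * (uv_res - 1)).round().astype(int), 0, uv_res - 1)
        v = np.clip((uv[..., 1] * (uv_res - 1)).round().astype(int), 0, uv_res - 1)
        np.add.at(tex, (v, u), img * w[..., None])   # weighted color splat
        np.add.at(acc, (v, u), w[..., None])
    seen = acc[..., 0] > 0
    tex[seen] /= acc[seen]
    return tex, ~seen   # coarse texture + hole mask for UV inpainting
```

The returned hole mask is exactly what a shape-aware UV inpainting model would consume, while residual shading baked into `tex` is what the refinement stage must remove.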
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
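A minimal PyTorch sketch of one such interleaved block, with illustrative names and sizes that are assumptions rather than the paper's architecture: a convolution acts on the UV feature map, then features sampled at surface points attend to each other globally.

```python
import torch
import torch.nn as nn

class UVPointBlock(nn.Module):
    """One hybrid block: 2D convolution in UV space, then global
    attention over features gathered at 3D surface sample points."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, uv_feat, point_uv):
        # uv_feat:  (B, C, H, W) features living on the UV texture map
        # point_uv: (B, N, 2) UV locations of N surface points in [-1, 1]
        uv_feat = uv_feat + self.conv(uv_feat)
        # Gather point features from the UV map by bilinear sampling.
        pts = nn.functional.grid_sample(
            uv_feat, point_uv.unsqueeze(2), align_corners=False
        ).squeeze(-1).transpose(1, 2)        # (B, N, C)
        pts = self.norm(pts)
        pts, _ = self.attn(pts, pts, pts)    # global attention over points
        # Writing the attended features back into the UV map is omitted here.
        return uv_feat, pts
```

Interleaving the two operators lets local UV convolutions handle texture detail while point attention propagates information across seams that are distant in UV space but adjacent on the surface.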
arXiv Detail & Related papers (2024-11-22T05:22:11Z)
- MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D [63.9188712646076]
Texturing is a crucial step in the 3D asset production workflow, which enhances the visual appeal and diversity of 3D assets.
Despite recent advancements, existing methods often yield subpar results, primarily due to local discontinuities and inconsistencies across multiple views.
We propose a novel framework called MVPaint, which can generate high-resolution, seamless textures while ensuring multi-view consistency.
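A minimal sketch of the synchronization idea, with `denoise`, `to_uv`, and `from_uv` as hypothetical callables standing in for the diffusion model and the view/UV projections: at each step, per-view predictions are merged in a shared UV space and broadcast back, so every view continues denoising from the same texture.

```python
import numpy as np

def synchronized_step(view_latents, denoise, to_uv, from_uv):
    """One synchronized multi-view denoising round (illustrative)."""
    uv_acc, w_acc = None, 0.0
    for i, lat in enumerate(view_latents):
        pred = denoise(lat)              # per-view denoised prediction
        uv, w = to_uv(pred, view=i)      # project into UV space + weights
        uv_acc = uv * w if uv_acc is None else uv_acc + uv * w
        w_acc = w_acc + w
    uv_merged = uv_acc / np.maximum(w_acc, 1e-8)
    # Re-render the merged texture so all views stay consistent.
    return [from_uv(uv_merged, view=i) for i in range(len(view_latents))]
```

Merging at every step, rather than only after sampling finishes, is what suppresses the local discontinuities mentioned above.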
arXiv Detail & Related papers (2024-11-04T17:59:39Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoising model on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
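A sketch of that aggregation loop, with `render`, `denoise_step`, and `write_back` as hypothetical callables: within one diffusion step, views are visited in sequence, each rendered from the current latent texture, denoised in 2D, and written back before the next view is processed.

```python
def interlaced_view_sweep(latent_tex, views, render, denoise_step, write_back):
    """One sequential sweep over views at a single noise level
    (illustrative sketch, not the paper's exact sampler)."""
    for v in views:
        obs = render(latent_tex, v)                  # 2D render of latents
        obs = denoise_step(obs)                      # one 2D diffusion update
        latent_tex = write_back(latent_tex, obs, v)  # project back to texture
    return latent_tex
```

Because every view reads the latest latent texture, later views are conditioned on what earlier views already decided, which keeps the texture coherent without a separate fusion pass.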
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- Texture Generation on 3D Meshes with Point-UV Diffusion [86.69672057856243]
We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
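A sketch of the hand-off between the two stages, under the assumption that the coarse stage outputs per-point colors with known UV coordinates; nearest-pixel splatting here is an illustrative simplification.

```python
import numpy as np

def bake_points_to_uv(point_colors, point_uvs, uv_res=512):
    """Rasterize coarse per-point colors into a UV image that the
    fine-stage 2D diffusion model can refine (illustrative sketch)."""
    img = np.zeros((uv_res, uv_res, 3))
    cnt = np.zeros((uv_res, uv_res, 1))
    ij = np.clip((point_uvs * (uv_res - 1)).astype(int), 0, uv_res - 1)
    np.add.at(img, (ij[:, 1], ij[:, 0]), point_colors)  # splat colors
    np.add.at(cnt, (ij[:, 1], ij[:, 0]), 1.0)           # count hits
    hit = cnt[..., 0] > 0
    img[hit] /= cnt[hit]
    return img   # coarse UV image, input to the fine stage
```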
arXiv Detail & Related papers (2023-08-21T06:20:54Z)
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior [36.40582157854088]
In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.
We leverage prior knowledge from a well-trained 2D diffusion model to act as 3D-aware supervision for 3D creation.
Our method presents the first attempt to achieve high-quality 3D creation from a single image for general objects and enables various applications such as text-to-3D creation and texture editing.
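Using a 2D diffusion prior as 3D-aware supervision is commonly implemented as a score-distillation gradient; a minimal sketch follows, with `predict_noise` as a hypothetical wrapper around the frozen 2D model and unit weighting as a simplification of the usual timestep-dependent weight.

```python
import torch

def sds_gradient(render, predict_noise, t, alpha_bar_t):
    """Score-distillation-style gradient from a frozen 2D prior.

    render:      (B, 3, H, W) differentiable render of the 3D scene
    t:           diffusion timestep passed to the frozen model
    alpha_bar_t: scalar tensor, cumulative noise-schedule value at t
    """
    noise = torch.randn_like(render)
    noisy = alpha_bar_t.sqrt() * render + (1.0 - alpha_bar_t).sqrt() * noise
    with torch.no_grad():
        eps_hat = predict_noise(noisy, t)   # frozen prior's noise estimate
    # The residual nudges the render toward images the prior finds likely;
    # apply it via render.backward(gradient=...) to update 3D parameters.
    return eps_hat - noise
```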
arXiv Detail & Related papers (2023-03-24T17:54:22Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
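A sketch of the trimap idea: each rendered pixel is labeled by how well earlier views already textured it, and the label controls how freely the diffusion sampler may repaint it. The per-pixel quality score and the threshold are illustrative assumptions.

```python
import numpy as np

KEEP, REFINE, GENERATE = 0, 1, 2

def trimap_partition(prev_quality, keep_thresh=0.8):
    """Label pixels as keep / refine / generate (illustrative sketch).

    prev_quality: (H, W) score in [0, 1] for how well previous views
                  textured each pixel; 0 means never textured.
    """
    labels = np.full(prev_quality.shape, REFINE, dtype=np.int8)
    labels[prev_quality >= keep_thresh] = KEEP
    labels[prev_quality == 0.0] = GENERATE
    # During sampling: 'keep' pixels are clamped to the existing texture,
    # 'generate' pixels are sampled freely, and 'refine' pixels are
    # re-noised and only partially re-sampled.
    return labels
```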
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
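A toy sketch of the core property, with a plain MLP standing in for the learned embedding (an assumption, not the paper's network): points on different objects map into one shared UV space, so semantically matching points land at the same UV location and texture transfer reduces to swapping texture images.

```python
import torch
import torch.nn as nn

class UVMapper(nn.Module):
    """Toy stand-in for a learned, aligned 3D-to-UV embedding."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),   # UV coords in [-1, 1]
        )

    def forward(self, xyz):     # (N, 3) surface points
        return self.net(xyz)    # (N, 2) aligned UV coordinates

# Transfer: sample object B's texture image at object A's UV coordinates,
# e.g. with torch.nn.functional.grid_sample.
```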
arXiv Detail & Related papers (2022-04-06T21:39:24Z)