DragTex: Generative Point-Based Texture Editing on 3D Mesh
- URL: http://arxiv.org/abs/2403.02217v1
- Date: Mon, 4 Mar 2024 17:05:01 GMT
- Title: DragTex: Generative Point-Based Texture Editing on 3D Mesh
- Authors: Yudi Zhang, Qi Xu, Lei Zhang
- Abstract summary: We propose a generative point-based 3D mesh texture editing method called DragTex.
This method utilizes a diffusion model to blend locally inconsistent textures in the region near the deformed silhouette between different views.
We train LoRA using multi-view images instead of training each view individually, which significantly shortens the training time.
- Score: 11.163205302136625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating 3D textured meshes using generative artificial intelligence has
garnered significant attention recently. While existing methods support
text-based generative texture generation or editing on 3D meshes, they often
struggle to precisely control pixels of texture images through more intuitive
interaction. While 2D images can be edited generatively using drag interaction,
applying such methods directly to 3D mesh textures still leads to
issues such as the lack of local consistency among multiple views, error
accumulation and long training times. To address these challenges, we propose a
generative point-based 3D mesh texture editing method called DragTex. This
method utilizes a diffusion model to blend locally inconsistent textures in the
region near the deformed silhouette between different views, enabling locally
consistent texture editing. Besides, we fine-tune a decoder to reduce
reconstruction errors in the non-drag region, thereby mitigating overall error
accumulation. Moreover, we train LoRA using multi-view images instead of
training each view individually, which significantly shortens the training
time. The experimental results show that our method effectively achieves
dragging textures on 3D meshes and generates plausible textures that align with
the desired intent of drag interaction.
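As a rough illustration of the blending step, the sketch below (not the authors' implementation) composites two per-view texture estimates inside a feathered mask around the deformed silhouette while leaving other texels untouched. DragTex itself performs this blending with a diffusion model; the helper names, mask radius, and NumPy/SciPy realization here are illustrative assumptions.

```python
# Illustrative sketch only (not the DragTex implementation): texel-space blending of
# two per-view texture estimates inside a soft mask around the deformed silhouette.
# DragTex fills this region with a diffusion model; here we show only the
# masked-compositing idea. Function names and the mask radius are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_silhouette_mask(silhouette: np.ndarray, radius: int = 8) -> np.ndarray:
    """Turn a binary map of the deformed-silhouette region into feathered [0, 1] weights."""
    dist = distance_transform_edt(~silhouette.astype(bool))  # distance to the region
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

def blend_textures(tex_dragged: np.ndarray, tex_other: np.ndarray,
                   silhouette: np.ndarray) -> np.ndarray:
    """Composite the dragged view's texture over another view's texture near the
    deformed silhouette, keeping the other view's texels unchanged elsewhere."""
    w = soft_silhouette_mask(silhouette)[..., None]  # (H, W, 1) blend weights
    return w * tex_dragged + (1.0 - w) * tex_other

if __name__ == "__main__":
    h, w = 64, 64
    tex_a = np.random.rand(h, w, 3)    # texture baked from the dragged view
    tex_b = np.random.rand(h, w, 3)    # texture baked from a neighboring view
    sil = np.zeros((h, w), dtype=bool)
    sil[20:30, 20:30] = True           # toy deformed-silhouette region
    print(blend_textures(tex_a, tex_b, sil).shape)  # (64, 64, 3)
```

In the paper, the masked region is instead filled by the diffusion model, which is what keeps the edit locally consistent across neighboring views.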
Related papers
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [54.35214051961381]
3D meshes are widely used in computer vision and graphics for their efficiency in animation and minimal memory use in movies, games, AR, and VR.
However, creating temporally consistent and realistic textures for meshes remains labor-intensive for professional artists.
We present Tex4D, which integrates inherent geometry from mesh sequences with video diffusion models to produce consistent textures.
arXiv Detail & Related papers (2024-10-14T17:59:59Z)
- MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing [61.014328598895524]
We propose MaTe3D: mask-guided text-based 3D-aware portrait editing.
A new SDF-based 3D generator learns local and global representations with the proposed SDF and density consistency losses.
Conditional Distillation on Geometry and Texture (CDGT) mitigates visual ambiguity and avoids mismatch between texture and geometry.
arXiv Detail & Related papers (2023-12-12T03:04:08Z)
- GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures [0.06990493129893112]
High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that the textures generated by GeoScaler deliver significantly better quality rendered images compared to those generated by traditional downsampling methods.
arXiv Detail & Related papers (2023-11-28T07:55:25Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoising model to a set of 2D renders, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
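As a rough illustration of the geometry/appearance disentanglement described in the NeuTex entry above, the sketch below (not the NeuTex implementation; module names, layer sizes, and activations are assumptions) pairs a density field over 3D points with a learned 3D-to-UV mapping and a color field over 2D texture coordinates.

```python
# Minimal sketch (not the NeuTex code): geometry as a continuous 3D density field,
# appearance as a continuous 2D texture map reached through a learned UV mapping.
# All module names, hidden sizes, and activations are illustrative assumptions.
import torch
import torch.nn as nn

class DisentangledField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Geometry: continuous 3D volume, xyz -> density.
        self.density = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())
        # Learned UV mapping: xyz -> 2D texture coordinate.
        self.uv = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh())
        # Appearance: continuous 2D texture map, uv -> RGB.
        self.texture = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xyz: torch.Tensor):
        sigma = self.density(xyz)  # (N, 1) volume density
        uv = self.uv(xyz)          # (N, 2) texture coordinates in [-1, 1]
        rgb = self.texture(uv)     # (N, 3) color looked up through the 2D map
        return sigma, rgb, uv

if __name__ == "__main__":
    field = DisentangledField()
    pts = torch.rand(1024, 3) * 2 - 1  # query points in a unit cube
    sigma, rgb, uv = field(pts)
    print(sigma.shape, rgb.shape, uv.shape)
```

Routing color through the 2D UV coordinate rather than the 3D point is what makes the appearance editable as an ordinary texture map.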