Enhancing Texture Generation with High-Fidelity Using Advanced Texture Priors
- URL: http://arxiv.org/abs/2403.05102v1
- Date: Fri, 8 Mar 2024 07:07:28 GMT
- Title: Enhancing Texture Generation with High-Fidelity Using Advanced Texture Priors
- Authors: Kuo Xu, Maoyu Wang, Muyu Wang, Lincong Feng, Tianhui Zhang, Xiaoli Liu
- Abstract summary: We propose a high-resolution and high-fidelity texture restoration technique that uses the rough texture as the initial input.
We also introduce a background noise smoothing technique based on a self-supervised scheme to address the noise problem in current high-resolution texture synthesis schemes.
Our approach enables high-resolution texture synthesis, paving the way for high-definition and high-detail texture synthesis technology.
- Score: 1.4542583614606408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent advancements in 2D generation technology have sparked a widespread
discussion on using 2D priors for 3D shape and texture content generation.
However, these methods often overlook the subsequent user operations, such as
texture aliasing and blurring that occur when the user acquires the 3D model
and simplifies its structure. Traditional graphics methods partially alleviate
this issue, but recent texture synthesis technologies fail to ensure
consistency with the original model's appearance and cannot achieve
high-fidelity restoration. Moreover, background noise frequently arises in
high-resolution texture synthesis, limiting the practical application of these
generation technologies. In this work, we propose a high-resolution and
high-fidelity texture restoration technique that uses the rough texture as the
initial input to enhance the consistency between the synthetic texture and the
initial texture, thereby overcoming the issues of aliasing and blurring caused
by the user's structure simplification operations. Additionally, we introduce a
background noise smoothing technique based on a self-supervised scheme to
address the noise problem in current high-resolution texture synthesis schemes.
Our approach enables high-resolution texture synthesis, paving the way for
high-definition and high-detail texture synthesis technology. Experiments
demonstrate that our scheme outperforms currently known schemes in
high-fidelity texture recovery under high-resolution conditions.
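The abstract describes two components: initializing synthesis from the rough texture so the result stays consistent with it, and a self-supervised background-noise smoothing scheme. Below is a minimal sketch of the first idea only, assuming an off-the-shelf diffusion image-to-image pass; the base model, prompt, and strength value are illustrative assumptions, not the authors' actual restoration network.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed base model; the paper does not name one.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The rough (aliased/blurred) texture serves as the initial input.
rough = Image.open("rough_texture.png").convert("RGB").resize((512, 512))

# A low strength keeps the output close to the rough texture (consistency),
# while the denoising pass restores high-frequency detail lost to the
# user's structure-simplification operations.
refined = pipe(
    prompt="high-detail surface texture",  # illustrative prompt
    image=rough,
    strength=0.35,       # small value -> high fidelity to the initial texture
    guidance_scale=7.5,
).images[0]
refined.save("refined_texture.png")
```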
Related papers
- LaFiTe: A Generative Latent Field for 3D Native Texturing [72.05710323154288]
Existing native approaches are hindered by the absence of a powerful and versatile representation, which severely limits the fidelity and generality of their generated textures.
We introduce LaFiTe, which generates high-quality textures constrained by a sparse color representation and UV parameterization.
arXiv Detail & Related papers (2025-12-04T13:33:49Z)
- SeqTex: Generate Mesh Textures in Video Sequence [62.766839821764144]
We introduce SeqTex, a novel end-to-end framework for training 3D texture generative models.
We show that SeqTex achieves state-of-the-art performance on both image-conditioned and text-conditioned 3D texture generation tasks.
arXiv Detail & Related papers (2025-07-06T07:58:36Z)
- RomanTex: Decoupling 3D-aware Rotary Positional Embedded Multi-Attention Network for Texture Synthesis [10.350576861948952]
RomanTex is a multiview-based texture generation framework that integrates a multi-attention network with an underlying 3D representation.
Our method achieves state-of-the-art results in texture quality and consistency.
arXiv Detail & Related papers (2025-03-24T17:56:11Z)
- SuperCarver: Texture-Consistent 3D Geometry Super-Resolution for High-Fidelity Surface Detail Generation [70.76810765911499]
SuperCarver is a 3D geometry framework specifically tailored for adding texture-consistent surface details to given coarse meshes.
To achieve geometric detail generation, we develop a deterministic prior-guided normal diffusion model fine-tuned on a dataset of paired low-poly and high-poly normal renderings.
To optimize mesh structures from potentially imperfect normal map predictions, we design a simple yet effective noise-resistant inverse rendering scheme.
arXiv Detail & Related papers (2025-03-12T14:38:45Z)
- Texture Image Synthesis Using Spatial GAN Based on Vision Transformers [1.6482333106552793]
We propose ViT-SGAN, a new hybrid model that fuses Vision Transformers (ViTs) with a Spatial Generative Adversarial Network (SGAN) to address the limitations of previous methods.
By incorporating specialized texture descriptors such as mean-variance (μ, σ) and textons into the self-attention mechanism of ViTs, our model achieves superior texture synthesis.
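A minimal PyTorch sketch of how per-patch mean/variance statistics could be attached to ViT patch tokens, as the summary describes; the shapes, the projection layer, and the omission of texton features are assumptions, not ViT-SGAN's actual architecture.

```python
import torch
import torch.nn as nn

class DescriptorTokens(nn.Module):
    """Append per-patch (mu, sigma) statistics before projecting to tokens."""
    def __init__(self, patch_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(patch_dim + 2, embed_dim)  # +2 for (mu, sigma)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_dim) of flattened pixel patches
        mu = patches.mean(dim=-1, keepdim=True)
        sigma = patches.std(dim=-1, keepdim=True)
        return self.proj(torch.cat([patches, mu, sigma], dim=-1))

tokens = DescriptorTokens(16 * 16 * 3, 384)(torch.randn(2, 196, 16 * 16 * 3))
# tokens: (2, 196, 384), ready for a standard ViT self-attention stack
```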
arXiv Detail & Related papers (2025-02-03T21:39:30Z)
- GausSurf: Geometry-Guided 3D Gaussian Splatting for Surface Reconstruction [79.42244344704154]
GausSurf employs geometry guidance from multi-view consistency in texture-rich areas and normal priors in texture-less areas of a scene.
Our method surpasses state-of-the-art methods in terms of reconstruction quality and computation time.
arXiv Detail & Related papers (2024-11-29T03:54:54Z)
- TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling [37.67373829836975]
We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
arXiv Detail & Related papers (2024-08-02T14:24:40Z)
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprising two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z)
- InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting [46.330305910974246]
We introduce InTeX, a novel framework for interactive text-to-texture synthesis.
InTeX includes a user-friendly interface that facilitates interaction and control throughout the synthesis process.
We develop a depth-aware inpainting model that integrates depth information with inpainting cues, effectively mitigating 3D inconsistencies.
arXiv Detail & Related papers (2024-03-18T15:31:57Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that could transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoising sampler over a set of 2D rendered views and aggregating the denoising predictions on a shared latent texture map.
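A toy sketch of the aggregation idea described above: per-view denoising predictions are scattered into one shared latent texture map and averaged where views overlap. The UV correspondences, shapes, and simple averaging rule are illustrative assumptions; TexFusion's actual sampler interleaves this with the diffusion steps.

```python
import torch

def aggregate_views(latents, uv_coords, tex_size=64, channels=4):
    """Average per-view denoising predictions into one latent texture map."""
    tex = torch.zeros(channels, tex_size * tex_size)
    weight = torch.zeros(1, tex_size * tex_size)
    for lat, uv in zip(latents, uv_coords):
        # lat: (channels, H, W) prediction; uv: (H, W, 2) integer texel indices
        idx = (uv[..., 1] * tex_size + uv[..., 0]).flatten()
        tex.index_add_(1, idx, lat.reshape(channels, -1))
        weight.index_add_(1, idx, torch.ones(1, idx.numel()))
    tex = tex / weight.clamp(min=1.0)  # average where views overlap
    return tex.reshape(channels, tex_size, tex_size)

views = [torch.randn(4, 32, 32) for _ in range(3)]
uvs = [torch.randint(0, 64, (32, 32, 2)) for _ in range(3)]
shared = aggregate_views(views, uvs)  # (4, 64, 64) shared latent texture
```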
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- Text-guided High-definition Consistency Texture Model [0.0]
We present the High-definition Consistency Texture Model (HCTM), a novel method that can generate high-definition textures for 3D meshes according to the text prompts.
We achieve this by leveraging a pre-trained depth-to-image diffusion model to generate single viewpoint results based on the text prompt and a depth map.
Our proposed approach has demonstrated promising results in generating high-definition and consistent textures for 3D meshes.
arXiv Detail & Related papers (2023-05-10T05:09:05Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Deep Tiling: Texture Tile Synthesis Using a Deep Learning Approach [0.0]
In many cases a texture image cannot cover a large 3D model surface because of its small resolution.
Deep learning based texture synthesis has proven to be very effective in such cases.
We propose a novel approach to example-based texture synthesis by using a robust deep learning process.
arXiv Detail & Related papers (2021-03-14T18:17:37Z)
- GramGAN: Deep 3D Texture Synthesis From 2D Exemplars [7.553635339893189]
We present a novel texture synthesis framework, enabling the generation of infinite, high-quality 3D textures given a 2D exemplar image.
Inspired by recent advances in natural texture synthesis, we train deep neural models to generate textures by non-linearly combining learned noise frequencies.
To achieve a highly realistic output conditioned on an exemplar patch, we propose a novel loss function that combines ideas from both style transfer and generative adversarial networks.
arXiv Detail & Related papers (2020-06-29T15:22:03Z)
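A compact sketch of the style-transfer side of such a combined loss: a Gram-matrix term over features of the generated and exemplar textures, to be added to an adversarial term. The feature sources and weighting below are assumptions, not GramGAN's exact configuration.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (batch, channels, H, W) activations from a pretrained extractor
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # (batch, c, c) correlations

def style_loss(gen_feats, exemplar_feats):
    # Sum of Gram differences across feature layers, as in style transfer.
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(e))
               for g, e in zip(gen_feats, exemplar_feats))

# Toy usage with random "features"; in practice these would come from, e.g.,
# a frozen VGG applied to generated and exemplar texture crops.
g = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
e = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
loss = style_loss(g, e)
# A GramGAN-style generator objective would combine this with an adversarial
# term: loss_G = loss_adv + lambda_style * loss   (weighting assumed)
```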
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.