TwinTex: Geometry-aware Texture Generation for Abstracted 3D
Architectural Models
- URL: http://arxiv.org/abs/2309.11258v1
- Date: Wed, 20 Sep 2023 12:33:53 GMT
- Title: TwinTex: Geometry-aware Texture Generation for Abstracted 3D
Architectural Models
- Authors: Weidan Xiong, Hongqian Zhang, Botao Peng, Ziyu Hu, Yongli Wu, Jianwei
Guo, Hui Huang
- Abstract summary: We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
- Score: 13.248386665044087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coarse architectural models are often generated at scales ranging from
individual buildings to scenes for downstream applications such as Digital Twin
City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as
twins from 3D dense reconstructions. However, these models typically lack
realistic texture relative to the real building or scene, making them
unsuitable for vivid display or direct reference. In this paper, we present
TwinTex, the first automatic texture mapping framework to generate a
photo-realistic texture for a piece-wise planar proxy. Our method addresses
most challenges occurring in such twin texture generation. Specifically, for
each primitive plane, we first select a small set of photos with greedy
heuristics considering photometric quality, perspective quality and facade
texture completeness. Then, different levels of line features (LoLs) are
extracted from the set of selected photos to generate guidance for later steps.
With LoLs, we employ optimization algorithms to align texture with geometry
from local to global. Finally, we fine-tune a diffusion model with a multi-mask
initialization component and a new dataset to inpaint the missing region.
Experimental results on many buildings, indoor scenes and man-made objects of
varying complexity demonstrate the generalization ability of our algorithm. Our
approach surpasses state-of-the-art texture mapping methods in terms of
high-fidelity quality and reaches a human-expert production level with much
less effort. Project page: https://vcc.tech/research/2023/TwinTex.
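The abstract's pipeline begins with a greedy photo selection per primitive plane. As a rough illustration of that step only, here is a minimal greedy-selection sketch; the `Photo` fields, scoring terms, and weights are assumptions invented for the example, not TwinTex's actual formulation.

```python
# Hypothetical sketch of greedy photo selection for one primitive plane,
# balancing photometric quality, perspective quality and coverage.
# Scores and weights are illustrative assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Photo:
    id: int
    sharpness: float        # proxy for photometric quality, in [0, 1]
    view_angle_cos: float   # cos(angle between view dir and plane normal)
    coverage_mask: set      # plane-grid cells this photo observes

def greedy_select(photos, grid_cells, k=4, w_photo=0.4, w_persp=0.3, w_cover=0.3):
    """Pick up to k photos that maximize a combined quality/coverage score."""
    selected, covered = [], set()
    for _ in range(k):
        best, best_score = None, -1.0
        for p in photos:
            if p in selected:
                continue
            new_cells = p.coverage_mask - covered
            gain = len(new_cells) / max(len(grid_cells), 1)  # marginal coverage
            score = w_photo * p.sharpness + w_persp * p.view_angle_cos + w_cover * gain
            if score > best_score:
                best, best_score = p, score
        if best is None or not (best.coverage_mask - covered):
            break  # no remaining photo adds coverage; stop early
        selected.append(best)
        covered |= best.coverage_mask
    return selected
```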
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward the domain of photorealistic, high-quality texture (a generic score distillation sketch follows this entry).
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
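The DSD objective above extends the score distillation sampling (SDS) family introduced for text-to-3D generation. For orientation only, here is a generic SDS update step; this is not DSD and not DreamPolish's code, and `diffusion_eps` (a frozen pretrained noise predictor) plus the schedules are assumed.

```python
# Generic score distillation sampling (SDS) step -- the objective family
# that DreamPolish's DSD extends. Hypothetical sketch; `diffusion_eps`
# and the noise schedule are assumed, not taken from the paper.
import torch

def sds_step(render, text_emb, diffusion_eps, alphas_cumprod, opt):
    """One SDS update on the 3D-representation parameters behind `render`.

    `render` must carry gradients back to those parameters; `opt` steps them."""
    t = torch.randint(20, 980, (1,))                       # random timestep
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(render)
    noisy = a_t.sqrt() * render + (1 - a_t).sqrt() * noise  # forward diffusion
    with torch.no_grad():
        eps_hat = diffusion_eps(noisy, t, text_emb)        # frozen denoiser
    w = 1 - a_t                                            # common SDS weighting
    # Gradient flows only through `render`; the residual acts as a target.
    loss = (w * (eps_hat - noise).detach() * render).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```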
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU (a sketch of such tile-based aggregation follows this entry).
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
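The "score aggregation strategy" mentioned for Infinite Texture above is not spelled out in the summary; averaging overlapping tile predictions (as popularized by MultiDiffusion-style approaches) is one common way to denoise a canvas beyond a model's native resolution. A minimal sketch of that general idea, with all names assumed:

```python
# One denoising step over a latent canvas larger than the model's native
# size, by averaging noise predictions from overlapping tiles
# (MultiDiffusion-style aggregation). A generic sketch under assumed
# names; not Infinite Texture's exact strategy.
import torch

def _starts(size, tile, stride):
    s = list(range(0, size - tile + 1, stride))
    if s[-1] != size - tile:
        s.append(size - tile)          # make sure the border is covered
    return s

def tiled_eps(canvas, t, denoiser, tile=64, stride=48):
    """Aggregate per-tile predictions into one canvas-sized estimate.

    Assumes canvas spatial dims are >= tile."""
    eps_sum = torch.zeros_like(canvas)
    weight = torch.zeros_like(canvas)
    _, _, H, W = canvas.shape
    for y in _starts(H, tile, stride):
        for x in _starts(W, tile, stride):
            eps = denoiser(canvas[:, :, y:y+tile, x:x+tile], t)
            eps_sum[:, :, y:y+tile, x:x+tile] += eps
            weight[:, :, y:y+tile, x:x+tile] += 1.0
    return eps_sum / weight            # average where tiles overlap
```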
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser on a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map (a sketch of this view-aggregation idea follows this entry).
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
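The TexFusion summary above describes denoising 2D renders and merging the predictions on a shared latent texture map. A hedged sketch of that per-step aggregation follows; `render_latent`, `backproject_to_uv` and the denoiser signature are all assumptions for illustration, not the paper's API.

```python
# One diffusion step in texture space, in the spirit of the summary:
# denoise several rendered views, then merge the predictions on a shared
# latent texture map. All helpers here are assumed for illustration.
import torch

def texture_denoise_step(tex_latent, views, t, denoiser, render_latent,
                         backproject_to_uv):
    """tex_latent: (C, H, W) latent UV map; views: camera parameters."""
    num = torch.zeros_like(tex_latent)
    den = torch.zeros_like(tex_latent)
    for cam in views:
        screen = render_latent(tex_latent, cam)        # rasterize UV latent to view
        pred = denoiser(screen.unsqueeze(0), t)[0]     # per-view denoising result
        uv_pred, uv_mask = backproject_to_uv(pred, cam)  # splat back to UV space
        num += uv_pred * uv_mask
        den += uv_mask
    # Keep old values where no view observes a texel; average elsewhere.
    return torch.where(den > 0, num / den.clamp(min=1e-6), tex_latent)
```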
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
- Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for two tasks: generation of both geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)