Semantic Image Translation for Repairing the Texture Defects of Building Models
- URL: http://arxiv.org/abs/2303.17418v2
- Date: Sat, 1 Apr 2023 07:55:06 GMT
- Title: Semantic Image Translation for Repairing the Texture Defects of Building Models
- Authors: Qisen Shang, Han Hu, Haojia Yu, Bo Xu, Libin Wang, Qing Zhu
- Abstract summary: We introduce a novel approach for synthesizing façade texture images that authentically reflect the architectural style from a structured label map.
Our proposed method is also capable of synthesizing texture images with specific styles for façades that lack pre-existing textures.
- Score: 16.764719266178655
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The accurate representation of 3D building models in urban environments is significantly hindered by challenges such as texture occlusion, blurring, and missing details, which are difficult to mitigate through standard photogrammetric texture mapping pipelines. Current image completion methods often struggle to produce structured results and to effectively handle the intricate nature of highly structured façade textures with diverse architectural styles. Furthermore, existing image synthesis methods have difficulty preserving the high-frequency details and artificial regular structures that are essential for realistic façade texture synthesis. To address these challenges, we introduce a novel approach for synthesizing façade texture images that authentically reflect the architectural style from a structured label map, guided by a ground-truth façade image. To preserve fine details and regular structures, we propose a regularity-aware multi-domain method that capitalizes on frequency information and corner maps. We also incorporate SEAN blocks into our generator to enable versatile style transfer. To generate plausible structured images without undesirable regions, we employ image completion techniques to remove occlusions according to semantics prior to image inference. Our proposed method is also capable of synthesizing texture images with specific styles for façades that lack pre-existing textures, using manually annotated labels. Experimental results on publicly available façade image and 3D model datasets demonstrate that our method yields superior results and effectively addresses issues associated with flawed textures. The code and datasets will be made publicly available for further research and development.
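The abstract names two concrete mechanisms: auxiliary frequency/corner input domains and SEAN-based style modulation. Below is a minimal sketch of how the "frequency information" and "corner maps" might be derived from a façade image; the abstract does not specify the exact operators, so the FFT log-magnitude map and Harris corner response used here, along with the function names and file path, are illustrative assumptions.

```python
# Sketch only: plausible stand-ins for the auxiliary "frequency" and
# "corner map" domains; not the authors' operators.
import cv2
import numpy as np

def frequency_map(gray: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum; repetitive facade elements such as window
    grids show up as strong periodic peaks."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag = np.log1p(np.abs(spectrum))
    return (mag / mag.max()).astype(np.float32)

def corner_map(gray: np.ndarray) -> np.ndarray:
    """Harris corner response, highlighting the regular lattice of corners
    formed by windows, doors, and ledges."""
    resp = cv2.cornerHarris(gray.astype(np.float32), blockSize=2, ksize=3, k=0.04)
    resp = np.clip(resp, 0.0, None)
    return (resp / (resp.max() + 1e-8)).astype(np.float32)

gray = cv2.imread("facade.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
aux_channels = np.stack([frequency_map(gray), corner_map(gray)])  # (2, H, W)
```

SEAN (semantic region-adaptive normalization, Zhu et al., CVPR 2020) is the one published component the abstract names directly. A condensed PyTorch sketch of the mechanism follows; the layer sizes and shapes are illustrative assumptions, not the authors' configuration.

```python
# Condensed SEAN-style normalization: one style code per semantic region
# modulates the normalized features wherever that region's label is active.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEANBlock(nn.Module):
    def __init__(self, channels: int, style_dim: int = 512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # Map each region's style code to per-channel modulation parameters.
        self.to_gamma = nn.Linear(style_dim, channels)
        self.to_beta = nn.Linear(style_dim, channels)

    def forward(self, x, seg, region_styles):
        # x: (B, C, H, W) features; seg: (B, K, H, W) one-hot label map;
        # region_styles: (B, K, style_dim) codes from a style encoder.
        _, _, h, w = x.shape
        seg = F.interpolate(seg, size=(h, w), mode="nearest")
        # Broadcast each region's modulation over the pixels it occupies.
        gamma = torch.einsum("bkhw,bkc->bchw", seg, self.to_gamma(region_styles))
        beta = torch.einsum("bkhw,bkc->bchw", seg, self.to_beta(region_styles))
        return self.norm(x) * (1 + gamma) + beta
```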
Related papers
- On Synthetic Texture Datasets: Challenges, Creation, and Curation [1.9567015559455132]
We create a dataset of 362,880 texture images that span 56 textures.
During the process of generating images, we find that NSFW safety filters in image generation pipelines are highly sensitive to texture.
arXiv Detail & Related papers (2024-09-16T14:02:18Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters (a concept sketch follows this list).
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Texture Transform Attention for Realistic Image Inpainting [6.275013056564918]
We propose a Texture Transform Attention network that inpaints missing regions with fine detail.
Texture Transform Attention is used to create a new reassembled texture map using fine textures and coarse semantics.
We evaluate our model end-to-end with the publicly available datasets CelebA-HQ and Places2.
arXiv Detail & Related papers (2020-12-08T06:28:51Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrate the superiority of our proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-15T17:49:20Z)
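As referenced in the SAWN entry above, here is a minimal, hypothetical sketch of the idea behind flow-warped normalization: a learned flow field warps spatially-varying modulation parameters before they scale and shift the normalized features, so that source-image style is aligned to the target pose. This illustrates the concept only and is not the authors' implementation; the flow is assumed to be predicted elsewhere.

```python
# Hypothetical SAWN-like flow-warped modulation. The flow arrives as
# normalized [-1, 1] grid offsets of shape (B, H, W, 2).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(params: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp (B, C, H, W) modulation maps by sampling them at base-grid
    positions displaced by the flow."""
    b, _, h, w = params.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=params.device),
        torch.linspace(-1, 1, w, device=params.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)
    return F.grid_sample(params, base_grid + flow, align_corners=True)

class WarpedModulation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)

    def forward(self, x, gamma, beta, flow):
        # gamma/beta: spatial modulation maps predicted from the source image;
        # flow: learned offsets aligning them to the target pose.
        return self.norm(x) * (1 + warp(gamma, flow)) + warp(beta, flow)
```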
This list is automatically generated from the titles and abstracts of the papers listed on this site.