SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps
- URL: http://arxiv.org/abs/2201.05120v1
- Date: Thu, 13 Jan 2022 18:24:26 GMT
- Title: SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps
- Authors: Carlos Rodriguez-Pardo and Elena Garces
- Abstract summary: We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar.
In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously.
- Score: 3.504542161036043
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present SeamlessGAN, a method capable of automatically generating tileable
texture maps from a single input exemplar. In contrast to most existing
methods, which focus solely on the synthesis problem, our work tackles both
problems, synthesis and tileability, simultaneously. Our key idea is that
tiling a latent space within a generative network trained using adversarial
expansion techniques produces outputs with continuity at the seam
intersection, which can then be turned into tileable images by cropping the
central area. Since not every value of the latent space is valid to produce
high-quality outputs, we leverage the discriminator as a perceptual error
metric capable of identifying artifact-free textures during a sampling process.
Further, in contrast to previous work on deep texture synthesis, our model is
designed and optimized to work with multi-layered texture representations,
enabling textures composed of multiple maps such as albedo, normals, etc. We
extensively test our design choices for the network architecture, loss function
and sampling parameters. We show qualitatively and quantitatively that our
approach outperforms previous methods and works for textures of different
types.
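As a rough sketch of the mechanism described in the abstract, the snippet below tiles a sampled latent 2x2 before generation, crops the central region so the crop spans the seams, and uses the discriminator score to keep the most artifact-free candidate. The `generator` and `discriminator` modules and the latent shape are hypothetical stand-ins, not the authors' released architecture.

```python
import torch

def sample_tileable(generator, discriminator,
                    latent_shape=(1, 64, 32, 32), n_candidates=16):
    """Hypothetical sketch of latent tiling + discriminator-guided sampling."""
    best_crop, best_score = None, float("-inf")
    for _ in range(n_candidates):
        z = torch.randn(latent_shape)
        # Tile the latent 2x2: a generator trained with adversarial
        # expansion produces content that is continuous across the
        # boundaries between the repeated latents.
        z_tiled = z.repeat(1, 1, 2, 2)
        out = generator(z_tiled)          # image generated from tiled latent
        _, _, h, w = out.shape
        # The central crop spans the seam intersection, so repeating
        # the crop itself produces a seamless tiling.
        crop = out[:, :, h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        # Discriminator as a perceptual error metric: keep the
        # candidate it judges most realistic (artifact-free).
        score = discriminator(crop).mean().item()
        if score > best_score:
            best_crop, best_score = crop, score
    return best_crop
```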
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
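The aggregation step is not detailed above; a common way to realize arbitrary-resolution diffusion synthesis is to denoise overlapping windows and average the predicted noise where they overlap. The sketch below shows that generic pattern; `eps_model`, the window size, and the stride are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def aggregate_noise(eps_model, x_t, t, win=64, stride=32):
    """One denoising step over a large canvas x_t: (1, C, H, W).
    Assumes H and W line up with the window/stride for full coverage."""
    noise = torch.zeros_like(x_t)
    count = torch.zeros_like(x_t)
    _, _, H, W = x_t.shape
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            patch = x_t[:, :, top:top + win, left:left + win]
            eps = eps_model(patch, t)   # per-window noise prediction
            noise[:, :, top:top + win, left:left + win] += eps
            count[:, :, top:top + win, left:left + win] += 1
    # Average overlapping predictions into a single aggregated estimate.
    return noise / count.clamp(min=1)
```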
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- TexTile: A Differentiable Metric for Texture Tileability [10.684366243276198]
We introduce TexTile, a novel differentiable metric to quantify the degree to which a texture image can be concatenated with itself.
Existing methods for tileable texture synthesis focus on general texture quality, but lack explicit analysis of the intrinsic properties of a texture.
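TexTile itself is a learned, data-driven metric; as a toy illustration of the property it measures, the proxy below scores how strongly a texture's opposite edges disagree, which is the discontinuity that appears as a seam when the image is tiled. This proxy is an assumption of this summary, not the paper's metric.

```python
import torch

def seam_discontinuity(img):
    """img: (1, C, H, W) tensor in [0, 1]; lower is closer to tileable."""
    # When the image is tiled, its right edge abuts its left edge and
    # its bottom edge abuts its top edge, so edge mismatches become seams.
    seam_x = (img[..., :, 0] - img[..., :, -1]).abs().mean()
    seam_y = (img[..., 0, :] - img[..., -1, :]).abs().mean()
    return seam_x + seam_y
```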
arXiv Detail & Related papers (2024-03-19T17:59:09Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Generating Non-Stationary Textures using Self-Rectification [70.91414475376698]
This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
arXiv Detail & Related papers (2024-01-05T15:07:05Z)
- Diffusion-based Holistic Texture Rectification and Synthesis [26.144666226217062]
Traditional texture synthesis approaches focus on generating textures from pristine samples.
We propose a framework that synthesizes holistic textures from degraded samples in natural images.
arXiv Detail & Related papers (2023-09-26T08:44:46Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
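A minimal sketch of the warped-normalization idea, assuming SPADE-style per-pixel modulation maps and a grid_sample-convention flow field; all shapes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sawn(x, gamma, beta, flow):
    """x: (N, C, H, W) activations; gamma, beta: (N, C, H, W) modulation
    maps; flow: (N, H, W, 2) sampling grid in [-1, 1]."""
    x = F.instance_norm(x)  # parameter-free normalization
    # Warp the modulation parameters with the learned flow field so the
    # modulation is spatially aligned with the target pose.
    gamma_w = F.grid_sample(gamma, flow, align_corners=False)
    beta_w = F.grid_sample(beta, flow, align_corners=False)
    return gamma_w * x + beta_w
```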
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Image Inpainting Guided by Coherence Priors of Semantics and Textures [62.92586889409379]
We introduce coherence priors between the semantics and textures which make it possible to concentrate on completing separate textures in a semantic-wise manner.
We also propose two coherence losses to constrain the consistency between the semantics and the inpainted image in terms of the overall structure and detailed textures.
arXiv Detail & Related papers (2020-12-15T02:59:37Z)
- Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter [43.9258342767253]
We propose a novel way of using transposed convolution operation for texture synthesis.
Our framework achieves state-of-the-art texture synthesis quality based on various metrics.
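The core operation can be sketched as follows: features extracted from the exemplar serve as the filter of a transposed convolution, so each channel of a coarse score map "stamps" a feature patch onto the output canvas. Shapes and the stride are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def expand_with_features(score_map, feature_patch, stride=4):
    """score_map: (N, K, H, W) coarse placement map;
    feature_patch: (K, C, kh, kw) exemplar features used as filters."""
    # conv_transpose2d interprets its weight as (in_ch, out_ch, kH, kW),
    # so each of the K score channels deposits its feature patch onto
    # the upsampled output wherever its score is high.
    return F.conv_transpose2d(score_map, feature_patch, stride=stride)
```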
arXiv Detail & Related papers (2020-07-14T17:57:59Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
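As a rough illustration of the kind of local statistic such a model can be conditioned on, the sketch below builds a joint histogram of quantized intensities for horizontally adjacent pixels; the quantization and the single offset are illustrative choices, not the paper's exact definition.

```python
import numpy as np

def cooccurrence(gray, levels=8):
    """gray: (H, W) array in [0, 1]; returns a (levels, levels) joint
    histogram of value pairs at horizontal offset 1, summing to 1."""
    q = np.minimum((gray * levels).astype(int), levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    hist = np.zeros((levels, levels))
    np.add.at(hist, (left, right), 1)   # count each adjacent pair
    return hist / hist.sum()
```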
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.