Towards Universal Texture Synthesis by Combining Texton Broadcasting
with Noise Injection in StyleGAN-2
- URL: http://arxiv.org/abs/2203.04221v1
- Date: Tue, 8 Mar 2022 17:44:35 GMT
- Title: Towards Universal Texture Synthesis by Combining Texton Broadcasting
with Noise Injection in StyleGAN-2
- Authors: Jue Lin, Gaurav Sharma, Thrasyvoulos N. Pappas
- Abstract summary: We present a new approach for universal texture synthesis by incorporating a multi-scale texton broadcasting module in the StyleGAN-2 framework.
The texton broadcasting module introduces an inductive bias, enabling generation of a broader range of textures, from those with regular structures to completely stochastic ones.
- Score: 11.67779950826776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new approach for universal texture synthesis by incorporating a
multi-scale texton broadcasting module in the StyleGAN-2 framework. The texton
broadcasting module introduces an inductive bias, enabling generation of a
broader range of textures, from those with regular structures to completely
stochastic ones. To train and evaluate the proposed approach, we construct a
comprehensive high-resolution dataset that captures the diversity of natural
textures as well as stochastic variations within each perceptually uniform
texture. Experimental results demonstrate that the proposed approach yields
significantly better quality textures than the state of the art. The ultimate
goal of this work is a comprehensive understanding of texture space.
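The abstract does not spell out how the texton broadcasting module is wired into the generator. As a rough illustrative sketch only (function names, shapes, and the shared-noise-map convention are assumptions, not the authors' code), one scale of such a module might tile a learned texton code across the spatial grid and add StyleGAN-2-style per-pixel noise:

```python
import numpy as np

def texton_broadcast(features, textons, noise_scale=0.1, rng=None):
    """Broadcast a learned texton code across the spatial grid of a
    feature map, then inject per-pixel Gaussian noise (StyleGAN-2 style).

    features: (C, H, W) generator feature map at one scale
    textons:  (C,) learned texton vector for this scale
    """
    rng = np.random.default_rng() if rng is None else rng
    C, H, W = features.shape
    # Tile the texton over every spatial position (the "broadcast").
    broadcast = np.broadcast_to(textons[:, None, None], (C, H, W))
    # Noise injection: one noise map shared across channels, as in StyleGAN-2.
    noise = rng.standard_normal((1, H, W))
    return features + broadcast + noise_scale * noise

# Multi-scale usage: apply the module at each resolution of the synthesis
# pyramid (three toy scales here; real generators use many more).
feats = {s: np.zeros((8, s, s)) for s in (4, 8, 16)}
codes = {s: np.ones(8) for s in (4, 8, 16)}
out = {s: texton_broadcast(feats[s], codes[s]) for s in feats}
```

The broadcast term supplies the regular, repeating component of the texture, while the noise term supplies the stochastic component; varying `noise_scale` would trade one against the other.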
Related papers
- Texture Image Synthesis Using Spatial GAN Based on Vision Transformers [1.6482333106552793]
We propose ViT-SGAN, a new hybrid model that fuses Vision Transformers (ViTs) with a Spatial Generative Adversarial Network (SGAN) to address the limitations of previous methods.
By incorporating specialized texture descriptors such as mean-variance (mu, sigma) and textons into the self-attention mechanism of ViTs, our model achieves superior texture synthesis.
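The mean-variance (mu, sigma) descriptors mentioned above are simple first- and second-order patch statistics. As a minimal sketch (patch size and layout are illustrative assumptions, not ViT-SGAN's actual tokenization), they can be computed per patch like this:

```python
import numpy as np

def mu_sigma_descriptors(image, patch=4):
    """Per-patch (mean, std) texture descriptors of the kind ViT-SGAN
    feeds into self-attention alongside texton features.

    image: (H, W) grayscale array; returns (num_patches, 2).
    """
    H, W = image.shape
    desc = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            desc.append((p.mean(), p.std()))
    return np.array(desc)

# Toy 8x8 "texture": four 4x4 patches -> four (mu, sigma) descriptors.
tex = np.arange(64, dtype=float).reshape(8, 8)
d = mu_sigma_descriptors(tex, patch=4)
```

Each descriptor summarizes local brightness and contrast, giving the attention mechanism an explicit handle on low-order texture statistics.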
arXiv Detail & Related papers (2025-02-03T21:39:30Z) - NeRF-Texture: Synthesizing Neural Radiance Field Textures [77.24205024987414]
We propose a novel texture synthesis method with Neural Radiance Fields (NeRF) to capture and synthesize textures from given multi-view images.
In the proposed NeRF texture representation, a scene with fine geometric details is disentangled into the meso-structure textures and the underlying base shape.
We can synthesize NeRF-based textures through patch matching of latent features.
arXiv Detail & Related papers (2024-12-13T09:41:48Z) - Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - Generating Non-Stationary Textures using Self-Rectification [70.91414475376698]
This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
arXiv Detail & Related papers (2024-01-05T15:07:05Z) - Pyramid Texture Filtering [86.15126028139736]
We present a simple but effective technique to smooth out textures while preserving the prominent structures.
Our method is built upon a key observation -- the coarsest level in a Gaussian pyramid often naturally eliminates textures and summarizes the main image structures.
We show that our approach is effective at separating structure from texture of different scales, local contrasts, and forms, without degrading structures or introducing visual artifacts.
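The key observation above can be demonstrated directly. In this minimal numpy sketch (a 3x3 binomial kernel stands in for the usual 5-tap Gaussian; this is an illustration of the pyramid observation, not the paper's filtering algorithm), fine texture is annihilated by repeated blur-and-decimate steps while constant structure survives:

```python
import numpy as np

def gaussian_downsample(img):
    """One Gaussian-pyramid step: separable 3x3 binomial blur,
    then 2x decimation."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(img, 1, mode="reflect")
    v = k[0] * p[:-2, 1:-1] + k[1] * p[1:-1, 1:-1] + k[2] * p[2:, 1:-1]
    p = np.pad(v, 1, mode="reflect")
    h = k[0] * p[1:-1, :-2] + k[1] * p[1:-1, 1:-1] + k[2] * p[1:-1, 2:]
    return h[::2, ::2]

def coarsest_level(img, levels):
    """Coarsest pyramid level: drops fine texture, keeps coarse structure."""
    for _ in range(levels):
        img = gaussian_downsample(img)
    return img

# A +/-1 checkerboard (pure fine texture) is averaged away by the pyramid,
# while a constant region (pure structure) passes through unchanged.
cb = (np.indices((16, 16)).sum(0) % 2) * 2.0 - 1.0
flat = np.ones((16, 16))
```

For the checkerboard, every 3x3 binomial window sums equal amounts of +1 and -1, so a single pyramid step already maps it to zero; the constant image is a fixed point of the blur.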
arXiv Detail & Related papers (2023-05-11T02:05:30Z) - Paying U-Attention to Textures: Multi-Stage Hourglass Vision Transformer for Universal Texture Synthesis [2.8998926117101367]
We present a novel U-Attention vision Transformer for universal texture synthesis.
We exploit the natural long-range dependencies enabled by the attention mechanism to allow our approach to synthesize diverse textures.
We propose a hierarchical hourglass backbone that attends to the global structure and performs patch mapping at varying scales.
arXiv Detail & Related papers (2022-02-23T18:58:56Z) - Texture Reformer: Towards Fast and Universal Interactive Texture
Transfer [16.41438144343516]
Texture reformer is a neural framework for interactive texture transfer with user-specified guidance.
We introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy.
The experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework.
arXiv Detail & Related papers (2021-12-06T05:20:43Z) - Image Inpainting Guided by Coherence Priors of Semantics and Textures [62.92586889409379]
We introduce coherence priors between semantics and textures, which make it possible to complete separate textures in a semantics-aware manner.
We also propose two coherence losses to constrain the consistency between the semantics and the inpainted image in terms of the overall structure and detailed textures.
arXiv Detail & Related papers (2020-12-15T02:59:37Z) - Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset demonstrate the superiority of our framework over existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.