Learning in a Single Domain for Non-Stationary Multi-Texture Synthesis
- URL: http://arxiv.org/abs/2305.06200v2
- Date: Wed, 6 Sep 2023 04:16:17 GMT
- Title: Learning in a Single Domain for Non-Stationary Multi-Texture Synthesis
- Authors: Xudong Xie, Zhen Zhu, Zijie Wu, Zhiliang Xu, Yingying Zhu
- Abstract summary: Non-stationary textures exhibit large variance in scale and can hardly be synthesized by a single model.
We propose a multi-scale generator to capture structural patterns of various scales and synthesize textures effectively at minor cost.
We present a category-specific training strategy to focus on learning the texture patterns of a specific domain.
- Score: 9.213030142986417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims for a new generation task: non-stationary multi-texture
synthesis, which unifies synthesizing multiple non-stationary textures in a
single model. Most non-stationary textures exhibit large variance in scale and
can hardly be synthesized by a single model. To combat this, we propose a
multi-scale generator to capture structural patterns of various scales and
synthesize textures effectively at minor cost. However, it is still hard to
handle textures of different categories with different texture patterns.
Therefore, we present a category-specific training strategy to focus on
learning the texture patterns of a specific domain. Interestingly, once trained, our
model is able to produce multi-pattern generations with dynamic variations
without the need to finetune the model for different styles. Moreover, an
objective evaluation metric is designed for evaluating the quality of texture
expansion and global structure consistency. To our knowledge, ours is the first
scheme for this challenging task, including model, training, and evaluation.
Experimental results demonstrate the proposed method achieves superior
performance and time efficiency. The code will be available after
publication.
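The abstract's coarse-to-fine idea can be illustrated with a minimal sketch: generate from spatial noise at a coarse resolution, then repeatedly upsample and refine so each stage captures structure at a different scale. This is a hypothetical toy in PyTorch, assuming a 3-level pyramid; the module names, channel sizes, and layer choices are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch of a coarse-to-fine multi-scale texture generator.
# All sizes and block designs are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineBlock(nn.Module):
    """Residually refines the feature map at one scale of the pyramid."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement at this scale

class MultiScaleGenerator(nn.Module):
    def __init__(self, z_dim=16, channels=32, levels=3):
        super().__init__()
        self.stem = nn.Conv2d(z_dim, channels, 3, padding=1)
        self.levels = nn.ModuleList(RefineBlock(channels) for _ in range(levels))
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, z):
        # z: spatial noise at the coarsest resolution, shape (B, z_dim, H, W)
        x = self.stem(z)
        for block in self.levels:
            x = block(x)
            # double the resolution before the next refinement stage
            x = F.interpolate(x, scale_factor=2, mode="nearest")
        return torch.tanh(self.to_rgb(x))

g = MultiScaleGenerator()
out = g(torch.randn(1, 16, 8, 8))
print(out.shape)  # 8x8 noise expanded through 3 levels -> (1, 3, 64, 64)
```

Because every layer is convolutional, the same weights can synthesize textures at arbitrary output sizes simply by changing the spatial extent of the input noise, which is one motivation for fully convolutional texture generators.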
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation [117.3856882511919]
We propose the Style-HAllucinated Dual consistEncy learning (SHADE) framework to handle domain shift.
Our SHADE yields significant improvement and outperforms state-of-the-art methods by 5.07% and 8.35% in average mIoU across three real-world datasets.
arXiv Detail & Related papers (2022-04-06T02:49:06Z)
- SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps [3.504542161036043]
We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar.
In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously.
arXiv Detail & Related papers (2022-01-13T18:24:26Z)
- Texture synthesis via projection onto multiscale, multilayer statistics [0.0]
We present a new model for texture synthesis based on a multiscale, multilayer feature extractor.
We explain the necessity of the different types of pre-defined wavelet filters used in our model and the advantages of multilayer structures for image synthesis.
arXiv Detail & Related papers (2021-05-22T23:32:34Z)
- Texture Generation with Neural Cellular Automata [64.70093734012121]
We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm for generating a texture.
arXiv Detail & Related papers (2021-05-15T22:05:46Z)
- MTCRNN: A multi-scale RNN for directed audio texture synthesis [0.0]
We introduce a novel modelling approach for textures, combining recurrent neural networks trained at different levels of abstraction with a conditioning strategy that allows for user-directed synthesis.
We demonstrate the model's performance on a variety of datasets, examine its performance on various metrics, and discuss some potential applications.
arXiv Detail & Related papers (2020-11-25T09:13:53Z)
- A Generative Model for Texture Synthesis based on Optimal Transport between Feature Distributions [8.102785819558978]
We show how to use our framework to learn a feed-forward neural network that can synthesize new textures of arbitrary size on the fly in a very fast manner.
arXiv Detail & Related papers (2020-06-19T13:32:55Z)
- Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation [19.617821473205694]
It is challenging for a model trained with synthetic data to generalize to real data.
We diversify the texture of synthetic images using a style transfer algorithm.
We fine-tune the model with self-training to get direct supervision of the target texture.
arXiv Detail & Related papers (2020-03-02T13:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.