Co-occurrence Based Texture Synthesis
- URL: http://arxiv.org/abs/2005.08186v2
- Date: Wed, 22 Jul 2020 19:34:09 GMT
- Title: Co-occurrence Based Texture Synthesis
- Authors: Anna Darzi, Itai Lang, Ashutosh Taklikar, Hadar Averbuch-Elor, Shai
Avidan
- Abstract summary: We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
- Score: 25.4878061402506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As image generation techniques mature, there is a growing interest in
explainable representations that are easy to understand and intuitive to
manipulate. In this work, we turn to co-occurrence statistics, which have long
been used for texture analysis, to learn a controllable texture synthesis
model. We propose a fully convolutional generative adversarial network,
conditioned locally on co-occurrence statistics, to generate arbitrarily large
images while having local, interpretable control over the texture appearance.
To encourage fidelity to the input condition, we introduce a novel
differentiable co-occurrence loss that is integrated seamlessly into our
framework in an end-to-end fashion. We demonstrate that our solution offers a
stable, intuitive and interpretable latent representation for texture
synthesis, which can be used to generate a smooth texture morph between
different textures. We further show an interactive texture tool that allows a
user to adjust local characteristics of the synthesized texture image using the
co-occurrence values directly.
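To make the conditioning signal concrete, the sketch below shows one way a differentiable (soft) co-occurrence matrix of a grayscale texture patch could be computed in PyTorch. The bin layout, Gaussian bandwidth, and single pixel offset are illustrative assumptions; this is only the general idea behind a co-occurrence loss, not the authors' exact formulation.
```python
import torch

def soft_cooccurrence(img, bins=8, sigma=0.1, offset=(0, 1)):
    """Differentiable co-occurrence matrix of a grayscale image in [0, 1].

    Each pixel is softly assigned to `bins` intensity levels with a Gaussian
    kernel; the matrix accumulates joint assignments of a pixel and its
    neighbor at `offset`. Bin count, bandwidth, and the single offset are
    assumptions for this sketch.
    """
    centers = torch.linspace(0.0, 1.0, bins, device=img.device)          # (K,)
    # Soft assignment of every pixel to every intensity bin: (H, W, K)
    w = torch.exp(-((img.unsqueeze(-1) - centers) ** 2) / (2 * sigma ** 2))
    w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)

    dy, dx = offset
    a = w[: img.shape[0] - dy, : img.shape[1] - dx]    # each pixel
    b = w[dy:, dx:]                                    # its shifted neighbor
    m = torch.einsum('hwi,hwj->ij', a, b)              # joint bin counts (K, K)
    return m / m.sum()                                 # normalize to a distribution

# A co-occurrence loss in this spirit compares the matrices of a generated
# patch and a reference patch, e.g.:
# loss = torch.nn.functional.l1_loss(soft_cooccurrence(fake), soft_cooccurrence(real))
```
Because every step above is differentiable, such a matrix can be matched between generated and reference patches and back-propagated through the generator, which is the role the co-occurrence loss plays in this kind of framework.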
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
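The score-aggregation idea can be pictured as averaging per-tile noise predictions over overlapping crops of a large canvas at every denoising step. The sketch below is a generic version of that scheme; the `denoise` callable, tile size, and stride are placeholders, not the paper's actual strategy.
```python
import torch

def aggregate_scores(x, denoise, t, tile=64, stride=32):
    """Average per-tile noise predictions over a canvas larger than `tile`.

    `denoise(crop, t)` stands in for one reverse-diffusion noise prediction on
    a crop at timestep `t`. For simplicity, the canvas height and width are
    assumed to be multiples of `stride` (with `tile` a multiple of `stride`),
    so every pixel is covered by at least one crop.
    """
    _, _, H, W = x.shape
    out = torch.zeros_like(x)
    hits = torch.zeros_like(x)
    for top in range(0, H - tile + 1, stride):
        for left in range(0, W - tile + 1, stride):
            crop = x[:, :, top:top + tile, left:left + tile]
            out[:, :, top:top + tile, left:left + tile] += denoise(crop, t)
            hits[:, :, top:top + tile, left:left + tile] += 1.0
    return out / hits.clamp(min=1.0)
```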
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- Compositional Neural Textures [25.885557234297835]
This work introduces a fully unsupervised approach for representing textures using a compositional neural model.
We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance.
This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation.
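The texton representation described above can be pictured with a small data structure like the one below; the class layout, field names, and feature dimensionality are assumptions for illustration rather than the paper's API.
```python
import torch

class Texton:
    """A texton: a 2D Gaussian for spatial support plus an appearance feature."""

    def __init__(self, mean, cov, feature):
        self.mean = mean        # (2,) center position in image coordinates
        self.cov = cov          # (2, 2) covariance controlling size/orientation
        self.feature = feature  # (d,) latent code encoding detailed appearance

    def support(self, coords):
        """Unnormalized Gaussian weight at pixel coordinates of shape (N, 2)."""
        diff = coords - self.mean                          # (N, 2)
        sol = torch.linalg.solve(self.cov, diff.T).T       # (N, 2) = Sigma^{-1} diff
        return torch.exp(-0.5 * (diff * sol).sum(dim=-1))  # (N,)
```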
arXiv Detail & Related papers (2024-04-18T21:09:34Z)
- Generating Non-Stationary Textures using Self-Rectification [70.91414475376698]
This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
arXiv Detail & Related papers (2024-01-05T15:07:05Z)
- Texture Representation via Analysis and Synthesis with Generative Adversarial Networks [11.67779950826776]
We investigate data-driven texture modeling via analysis and synthesis with generative adversarial networks.
We adopt StyleGAN3 for synthesis and demonstrate that it produces diverse textures beyond those represented in the training data.
For texture analysis, we propose GAN inversion using a novel latent consistency criterion for synthesized textures, and iterative refinement with a Gramian loss for real textures.
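The Gramian loss referenced above is, in its common form, a match between Gram matrices of deep feature maps (in the style of Gatys et al.). A minimal sketch, assuming the features come from some fixed encoder:
```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix of a (B, C, H, W) feature map, normalized by spatial size."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (h * w)               # (B, C, C)

def gram_loss(feat_generated, feat_target):
    """Match second-order feature statistics of two textures."""
    return F.mse_loss(gram_matrix(feat_generated), gram_matrix(feat_target))
```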
arXiv Detail & Related papers (2022-12-20T03:57:11Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Texture Generation with Neural Cellular Automata [64.70093734012121]
We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model constitutes a learned, distributed, local algorithm for generating a texture.
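That claim is easier to picture with a minimal NCA update step. The sketch below follows the usual texture-NCA recipe (fixed identity/Sobel perception filters, a small 1x1-convolution update rule, stochastic cell firing); the channel counts and other settings are assumptions, not the paper's exact architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureNCA(nn.Module):
    """Minimal neural cellular automaton step: each cell perceives its
    neighborhood with fixed filters and updates its state with a small
    learned rule applied identically everywhere."""

    def __init__(self, channels=12, hidden=96):
        super().__init__()
        ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        kernels = torch.stack([ident, sobel_x, sobel_x.T])               # (3, 3, 3)
        # One identity + two Sobel filters per state channel (depthwise).
        self.register_buffer('filters', kernels.repeat(channels, 1, 1).unsqueeze(1))
        self.rule = nn.Sequential(
            nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, state, fire_rate=0.5):
        # Local perception: identity + spatial gradients per channel.
        perceived = F.conv2d(state, self.filters, padding=1, groups=state.shape[1])
        update = self.rule(perceived)
        # Stochastic update: only a random subset of cells changes each step.
        mask = (torch.rand_like(state[:, :1]) < fire_rate).float()
        return state + update * mask
```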
arXiv Detail & Related papers (2021-05-15T22:05:46Z)
- Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter [43.9258342767253]
We propose a novel way of using the transposed convolution operation for texture synthesis.
Our framework achieves state-of-the-art texture synthesis quality based on various metrics.
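The idea named in the title can be illustrated very roughly: a feature map extracted from the example texture is reused as the weights of a transposed convolution, so the texture features are "stamped" across a larger canvas. The shapes and stride below are assumptions for this sketch, not the paper's actual design.
```python
import torch
import torch.nn.functional as F

example_feat = torch.randn(16, 8, 8)     # (C, h, w) features of the input texture
placement = torch.randn(1, 1, 32, 32)    # coarse 1-channel map over the output canvas

# conv_transpose2d expects weights of shape (in_channels, out_channels, kH, kW),
# so the feature map becomes a (1, 16, 8, 8) kernel applied to the placement map.
weight = example_feat.unsqueeze(0)
expanded = F.conv_transpose2d(placement, weight, stride=4)
print(expanded.shape)                    # torch.Size([1, 16, 132, 132])
```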
arXiv Detail & Related papers (2020-07-14T17:57:59Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)