Texture Representation via Analysis and Synthesis with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2212.09983v1
- Date: Tue, 20 Dec 2022 03:57:11 GMT
- Title: Texture Representation via Analysis and Synthesis with Generative
Adversarial Networks
- Authors: Jue Lin, Gaurav Sharma, Thrasyvoulos N. Pappas
- Abstract summary: We investigate data-driven texture modeling via analysis and synthesis with generative adversarial networks.
We adopt StyleGAN3 for synthesis and demonstrate that it produces diverse textures beyond those represented in the training data.
For texture analysis, we propose GAN inversion using a novel latent domain reconstruction consistency criterion for synthesized textures, and iterative refinement with Gramian loss for real textures.
- Score: 11.67779950826776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate data-driven texture modeling via analysis and synthesis with
generative adversarial networks. For network training and testing, we have
compiled a diverse set of spatially homogeneous textures, ranging from
stochastic to regular. We adopt StyleGAN3 for synthesis and demonstrate that it
produces diverse textures beyond those represented in the training data. For
texture analysis, we propose GAN inversion using a novel latent domain
reconstruction consistency criterion for synthesized textures, and iterative
refinement with Gramian loss for real textures. We propose perceptual
procedures for evaluating network capabilities, exploring the global and local
behavior of latent space trajectories, and comparing with existing texture
analysis-synthesis techniques.
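The Gramian loss used for iterative refinement on real textures compares channel-wise feature correlations (Gram matrices) between the target and the synthesized texture. A minimal NumPy sketch of that loss is given below; it assumes feature maps already extracted from some network layer, and the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def gram_matrix(features):
    """Normalized Gram matrix of a feature map of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Channel-wise correlations, normalized by the number of entries.
    return f @ f.T / (c * h * w)

def gramian_loss(feat_target, feat_synth):
    """Squared Frobenius distance between the Gram matrices of two feature maps."""
    g_target = gram_matrix(feat_target)
    g_synth = gram_matrix(feat_synth)
    return float(np.sum((g_target - g_synth) ** 2))
```

In a Gatys-style setup this loss would be summed over several layers and minimized with respect to the latent code; here a single layer suffices to show the statistic being matched.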
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- Generating Non-Stationary Textures using Self-Rectification [70.91414475376698]
This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
arXiv Detail & Related papers (2024-01-05T15:07:05Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that could transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- Neural Textured Deformable Meshes for Robust Analysis-by-Synthesis [17.920305227880245]
Our paper formulates triple vision tasks in a consistent manner using approximate analysis-by-synthesis.
We show that our analysis-by-synthesis is much more robust than conventional neural networks when evaluated on real-world images.
arXiv Detail & Related papers (2023-05-31T18:45:02Z)
- Towards Universal Texture Synthesis by Combining Texton Broadcasting with Noise Injection in StyleGAN-2 [11.67779950826776]
We present a new approach for universal texture synthesis by incorporating a multi-scale texton broadcasting module in the StyleGAN-2 framework.
The texton broadcasting module introduces an inductive bias, enabling generation of a broader range of textures, from those with regular structures to completely stochastic ones.
arXiv Detail & Related papers (2022-03-08T17:44:35Z)
- Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z)
- Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans [14.098628848491147]
We introduce a novel approach to generate diverse high fidelity texture maps for 3D human meshes in a semi-supervised setup.
Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures with a variety of styles, that are then used for rendering purposes.
arXiv Detail & Related papers (2021-03-31T17:58:34Z)
- Synthetic Data and Hierarchical Object Detection in Overhead Imagery [0.0]
We develop novel synthetic data generation and augmentation techniques for enhancing low/zero-sample learning in satellite imagery.
To test the effectiveness of synthetic imagery, we employ it in the training of detection models and our two stage model, and evaluate the resulting models on real satellite images.
arXiv Detail & Related papers (2021-01-29T22:52:47Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
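The co-occurrence statistics that condition the generator above are, in their classical form, gray-level co-occurrence matrices (GLCMs): joint histograms of gray levels at pixel pairs separated by a fixed offset. A minimal sketch for a horizontal offset of one pixel is shown below (an illustration of the statistic, not the paper's conditioning pipeline):

```python
import numpy as np

def cooccurrence_matrix(img, levels=8):
    """Gray-level co-occurrence matrix for a 2D image with values in [0, 1).

    Counts how often gray level i occurs immediately to the left of gray
    level j (offset (0, 1)), then normalizes to a joint distribution.
    """
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize
    left, right = q[:, :-1], q[:, 1:]                        # horizontal pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left.ravel(), right.ravel()), 1)        # accumulate counts
    return glcm / glcm.sum()
```

A smooth texture concentrates mass near the GLCM diagonal, while a noisy one spreads it out; this is what makes the statistic a compact, interpretable texture descriptor.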
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
- Towards Analysis-friendly Face Representation with Scalable Feature and Texture Compression [113.30411004622508]
We show that a universal and collaborative visual information representation can be achieved in a hierarchical way.
Based on the strong generative capability of deep neural networks, the gap between the base feature layer and enhancement layer is further filled with the feature level texture reconstruction.
To improve the efficiency of the proposed framework, the base layer neural network is trained in a multi-task manner.
arXiv Detail & Related papers (2020-04-21T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.