Compositional Neural Textures
- URL: http://arxiv.org/abs/2404.12509v2
- Date: Mon, 23 Sep 2024 03:01:59 GMT
- Title: Compositional Neural Textures
- Authors: Peihan Tu, Li-Yi Wei, Matthias Zwicker
- Abstract summary: This work introduces a fully unsupervised approach for representing textures using a compositional neural model.
We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance.
This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation.
- Score: 25.885557234297835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Texture plays a vital role in enhancing visual richness in both real photographs and computer-generated imagery. However, the process of editing textures often involves laborious and repetitive manual adjustments of textons, which are the recurring local patterns that characterize textures. This work introduces a fully unsupervised approach for representing textures using a compositional neural model that captures individual textons. We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance. By modeling a texture as a discrete composition of Gaussian textons, the representation offers both expressiveness and ease of editing. Textures can be edited by modifying the compositional Gaussians within the latent space, and new textures can be efficiently synthesized by feeding the modified Gaussians through a generator network in a feed-forward manner. This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, texture interpolation, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation. The proposed approach contributes to advancing texture analysis, modeling, and editing techniques, and opens up new possibilities for creating visually appealing images with controllable textures.
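The core representation described above — each texton as a 2D Gaussian with an attached appearance feature, composed into a dense map before a generator network decodes it — can be illustrated with a minimal sketch. This is not the authors' implementation; the `Texton` structure and `splat_textons` helper are hypothetical names, and the paper's learned generator is omitted (only the Gaussian-weighted composition step is shown):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Texton:
    mean: np.ndarray     # (2,) center position in pixels
    cov: np.ndarray      # (2, 2) covariance; its support approximates the texton's shape
    feature: np.ndarray  # (d,) latent feature encoding detailed appearance

def splat_textons(textons, height, width):
    """Compose textons into a dense feature map by Gaussian-weighted splatting."""
    d = textons[0].feature.shape[0]
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([xs, ys], axis=-1).astype(float)  # (H, W, 2) pixel coordinates
    fmap = np.zeros((height, width, d))
    for t in textons:
        diff = coords - t.mean                # (H, W, 2) offset from texton center
        inv = np.linalg.inv(t.cov)
        # Squared Mahalanobis distance under the texton's covariance
        m = np.einsum('hwi,ij,hwj->hw', diff, inv, diff)
        w = np.exp(-0.5 * m)                  # (H, W) Gaussian weight
        fmap += w[..., None] * t.feature      # accumulate weighted appearance features
    return fmap
```

Editing in this scheme amounts to modifying the Gaussians (translating a `mean`, reshaping a `cov`, or swapping a `feature`) and re-splatting; in the paper, the resulting map is then decoded by a feed-forward generator network rather than used directly.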
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z) - TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfer [32.53299128227546]
We propose TextureDiffusion, a tuning-free image editing method applied to various texture transfer.
Query features in self-attention and features in residual blocks are used to preserve the structure of the input image.
To maintain the background, we introduce an edit localization technique which blends the self-attention results and the intermediate latents.
arXiv Detail & Related papers (2024-09-15T04:34:38Z) - Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoiser to a set of 2D renders of the 3D object and aggregating the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - Controlling Geometric Abstraction and Texture for Artistic Images [0.22835610890984162]
We present a novel method for the interactive control of geometric abstraction and texture in artistic images.
Previous example-based stylization methods often entangle shape, texture, and color, while generative methods for image synthesis generally make assumptions about the input image.
By contrast, our holistic approach spatially decomposes the input into shapes and a parametric representation of high-frequency details comprising the image's texture, thus enabling independent control of color and texture.
arXiv Detail & Related papers (2023-07-31T20:37:43Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z) - Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
arXiv Detail & Related papers (2020-05-17T08:01:44Z)
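The co-occurrence statistics that condition the generator in the entry above are classically computed as a gray-level co-occurrence matrix: a joint histogram of quantized intensities at pixel pairs separated by a fixed offset. The sketch below shows that standard statistic only (the `glcm` name is hypothetical, and the paper's GAN conditioning is not reproduced here):

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for an intensity image in [0, 1].

    Counts pairs (q[y, x], q[y + dy, x + dx]) over all valid pixels,
    where q is the image quantized to `levels` bins. Assumes the
    offset components dy, dx are non-negative, for simplicity.
    """
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    h, w = q.shape
    src = q[:h - dy, :w - dx]  # reference pixels
    dst = q[dy:, dx:]          # neighbors at the given offset
    mat = np.zeros((levels, levels))
    np.add.at(mat, (src.ravel(), dst.ravel()), 1)  # joint histogram of pairs
    return mat / mat.sum()     # normalize to a joint probability
```

Conditioning a fully convolutional GAN locally on such statistics, as the paper does, ties each output region to an interpretable descriptor of its pairwise intensity structure.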
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.