Generating Non-Stationary Textures using Self-Rectification
- URL: http://arxiv.org/abs/2401.02847v2
- Date: Tue, 30 Jan 2024 10:32:16 GMT
- Title: Generating Non-Stationary Textures using Self-Rectification
- Authors: Yang Zhou, Rongjun Xiao, Dani Lischinski, Daniel Cohen-Or, Hui Huang
- Abstract summary: This paper addresses the challenge of example-based non-stationary texture synthesis.
We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools.
Our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture.
- Score: 70.91414475376698
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper addresses the challenge of example-based non-stationary texture
synthesis. We introduce a novel two-step approach wherein users first modify a
reference texture using standard image editing tools, yielding an initial rough
target for the synthesis. Subsequently, our proposed method, termed
"self-rectification", automatically refines this target into a coherent,
seamless texture, while faithfully preserving the distinct visual
characteristics of the reference exemplar. Our method leverages a pre-trained
diffusion network and uses self-attention mechanisms to gradually align the
synthesized texture with the reference, ensuring the retention of the
structures in the provided target. Through experimental validation, our
approach exhibits exceptional proficiency in handling non-stationary textures,
demonstrating significant advancements in texture synthesis when compared to
existing state-of-the-art techniques. Code is available at
https://github.com/xiaorongjun000/Self-Rectification
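The repository above contains the official implementation. Purely as an unofficial illustration of the mechanism the abstract describes (self-attention in which queries come from the synthesized target while keys and values are drawn from the reference exemplar), here is a minimal PyTorch sketch; the function name and toy tensor shapes are ours, not the authors':

```python
import torch

def inject_reference_attention(q_tgt, k_ref, v_ref):
    """Cross-image self-attention: queries from the synthesized target,
    keys/values from the reference exemplar, so each denoising step
    pulls the target toward the reference's local appearance."""
    scale = q_tgt.shape[-1] ** -0.5
    attn = torch.softmax(q_tgt @ k_ref.transpose(-2, -1) * scale, dim=-1)
    return attn @ v_ref

# Toy check on random (batch, tokens, channels) feature maps:
q = torch.randn(1, 64, 32)    # queries from the rough edited target
k = torch.randn(1, 256, 32)   # keys from the reference exemplar
v = torch.randn(1, 256, 32)   # values from the reference exemplar
print(inject_reference_attention(q, k, v).shape)  # torch.Size([1, 64, 32])
```

In the paper's setting this substitution would happen inside the attention layers of the pre-trained diffusion U-Net during sampling, after inverting the user-edited target.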
Related papers
- RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models [3.714901836138171]
We propose a robust text-to-texture method for generating consistent and seamless textures that are well aligned with the mesh.
Our method leverages state-of-the-art 2D diffusion models, including SDXL and multiple ControlNets, to capture structural features and intricate details in the generated textures.
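As a sketch of how the building blocks the summary names (SDXL plus a ControlNet) are typically assembled with Hugging Face diffusers; the checkpoint IDs and prompt below are illustrative, and this is not RoCoTex's actual pipeline, which combines multiple ControlNets:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Illustrative checkpoint IDs; RoCoTex uses several ControlNets.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# depth_map would be a PIL image rendered from the mesh viewpoint:
# texture = pipe("weathered brick wall", image=depth_map).images[0]
```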
arXiv Detail & Related papers (2024-09-30T06:29:50Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
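One plausible reading of the score aggregation strategy is MultiDiffusion-style averaging of noise predictions over overlapping windows, which lets a fixed-size denoiser operate on an arbitrarily large canvas. A sketch under that assumption, with `model`, the window size, and the stride all invented here:

```python
import torch

def aggregated_score(model, x, t, win=64, stride=48):
    """Average noise predictions from a fixed-size denoiser over
    overlapping windows of a larger latent canvas x of shape (B, C, H, W)."""
    _, _, H, W = x.shape
    score = torch.zeros_like(x)
    count = torch.zeros(1, 1, H, W, device=x.device)
    ys = sorted({*range(0, H - win + 1, stride), H - win})
    xs = sorted({*range(0, W - win + 1, stride), W - win})
    for y in ys:
        for x0 in xs:
            score[:, :, y:y+win, x0:x0+win] += model(x[:, :, y:y+win, x0:x0+win], t)
            count[:, :, y:y+win, x0:x0+win] += 1
    return score / count  # every pixel is covered by at least one window

model = lambda crop, t: torch.zeros_like(crop)   # stand-in denoiser
print(aggregated_score(model, torch.randn(1, 4, 160, 160), t=10).shape)
```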
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
- TexTile: A Differentiable Metric for Texture Tileability [10.684366243276198]
We introduce TexTile, a novel differentiable metric to quantify the degree to which a texture image can be tiled with itself.
Existing methods for tileable texture synthesis focus on general texture quality, but lack explicit analysis of the intrinsic properties of a texture.
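TexTile itself is a learned metric; as a crude toy stand-in that only illustrates the property being scored, one can penalize discontinuities across an image's wrap-around seams. The proxy below is differentiable but is in no way the paper's metric:

```python
import torch

def seam_energy(img):
    """Toy tileability proxy for an image of shape (C, H, W): mean squared
    difference across the horizontal and vertical wrap-around seams.
    Lower means the image tiles more cleanly. NOT the TexTile metric."""
    h_seam = (img[:, 0, :] - img[:, -1, :]) ** 2   # top row vs bottom row
    v_seam = (img[:, :, 0] - img[:, :, -1]) ** 2   # left col vs right col
    return h_seam.mean() + v_seam.mean()

img = torch.rand(3, 64, 64, requires_grad=True)
loss = seam_energy(img)
loss.backward()   # differentiable, so usable as a training regularizer
print(float(loss))
```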
arXiv Detail & Related papers (2024-03-19T17:59:09Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that can transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic textures of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps [3.504542161036043]
We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar.
In contrast to most existing methods, focused solely on solving the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously.
arXiv Detail & Related papers (2022-01-13T18:24:26Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
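One way to read SAWN from this summary: a learned flow field spatially warps the per-pixel modulation (scale/shift) maps before a normalization layer applies them. The sketch below is a guess at that mechanism; the shapes, the instance-norm choice, and the flow parameterization are all assumptions:

```python
import torch
import torch.nn.functional as F

def warped_modulation(feat, gamma, beta, flow):
    """Warp per-pixel modulation maps by a flow field, then apply them
    to instance-normalized features.
    feat, gamma, beta: (B, C, H, W); flow: (B, H, W, 2) grid offsets."""
    identity = torch.eye(2, 3).unsqueeze(0).expand(feat.size(0), -1, -1).to(feat)
    base = F.affine_grid(identity, feat.size(), align_corners=False)
    grid = (base + flow).clamp(-1, 1)
    gamma_w = F.grid_sample(gamma, grid, align_corners=False)
    beta_w = F.grid_sample(beta, grid, align_corners=False)
    return F.instance_norm(feat) * (1 + gamma_w) + beta_w

feat = torch.randn(2, 8, 32, 32)
gamma, beta = torch.randn(2, 8, 32, 32), torch.randn(2, 8, 32, 32)
flow = 0.05 * torch.randn(2, 32, 32, 2)   # small learned offsets
print(warped_modulation(feat, gamma, beta, flow).shape)  # (2, 8, 32, 32)
```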
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
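Co-occurrence statistics here are joint histograms of values at pixel pairs separated by a fixed offset. A minimal way to compute one such histogram for a grayscale texture, just to give the flavor of the conditioning signal (the offset and bin count are arbitrary choices, not the paper's settings):

```python
import torch

def cooccurrence(gray, dx=1, dy=0, bins=16):
    """Joint histogram of quantized gray levels for pixel pairs at
    offset (dx, dy); gray has shape (H, W) with values in [0, 1]."""
    H, W = gray.shape
    q = (gray * (bins - 1)).long().clamp(0, bins - 1)
    a = q[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    b = q[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    hist = torch.bincount(a.flatten() * bins + b.flatten(), minlength=bins * bins)
    return (hist.float() / hist.sum()).view(bins, bins)

C = cooccurrence(torch.rand(64, 64), dx=1, dy=0)
print(C.shape, float(C.sum()))   # torch.Size([16, 16]) 1.0
```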
arXiv Detail & Related papers (2020-05-17T08:01:44Z)