A deep learning based interactive sketching system for fashion images design
- URL: http://arxiv.org/abs/2010.04413v1
- Date: Fri, 9 Oct 2020 07:50:56 GMT
- Title: A deep learning based interactive sketching system for fashion images design
- Authors: Yao Li, Xianggang Yu, Xiaoguang Han, Nianjuan Jiang, Kui Jia, Jiangbo Lu
- Abstract summary: We propose an interactive system to design diverse high-quality garment images from fashion sketches and texture information.
The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information.
In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges.
- Score: 47.09122395308728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose an interactive system to design diverse high-quality
garment images from fashion sketches and texture information. The major
challenge behind this system is to generate high-quality, detailed texture
according to the user-provided texture information. Prior works mainly use a
texture patch representation and try to map a small texture patch to a whole
garment image, and are hence unable to generate high-quality details. In contrast,
inspired by intrinsic image decomposition, we decompose this task into texture
synthesis and shading enhancement. In particular, we propose a novel bi-colored
edge texture representation to synthesize textured garment images and a shading
enhancer to render shading based on the grayscale edges. The bi-colored edge
representation provides simple but effective texture cues and color
constraints, so that details can be better reconstructed. Moreover, with
the rendered shading, the synthesized garment image becomes more vivid.
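A minimal sketch of how the two-stage design described in the abstract could be wired together, assuming a user-provided texture patch and two learned generators. The bi-colored edge construction below (Canny edges plus two dominant k-means colors) and the callables `texture_net` and `shading_net` are illustrative assumptions standing in for the paper's networks, not the authors' implementation.

```python
# Illustrative sketch only: the bi-colored edge construction (Canny edges plus two
# k-means colors) and the callables `texture_net` / `shading_net` are assumptions
# standing in for the paper's learned generators, not the authors' code.
import cv2
import numpy as np

def bi_colored_edge_map(texture_patch: np.ndarray) -> np.ndarray:
    """Build a simple bi-colored edge representation of a BGR texture patch:
    edge pixels take one dominant color, the background takes the other."""
    gray = cv2.cvtColor(texture_patch, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # texture cue: edge structure
    pixels = texture_patch.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(pixels, 2, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    fg, bg = centers.astype(np.uint8)                      # the two color constraints
    return np.where(edges[..., None] > 0, fg, bg)

def design_garment(sketch, texture_patch, texture_net, shading_net):
    """Compose the two stages: texture synthesis, then shading enhancement."""
    edge_rep = bi_colored_edge_map(texture_patch)
    flat = texture_net(sketch, edge_rep)                   # detailed, shading-free garment
    gray_edges = cv2.Canny(cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY), 100, 200)
    shading = shading_net(gray_edges)                      # assumed to return values in [0, 1]
    # Intrinsic-image style recombination: reflectance (texture) times shading.
    vivid = flat.astype(np.float32) * shading[..., None]
    return np.clip(vivid, 0, 255).astype(np.uint8)
```

The composition mirrors intrinsic image decomposition: the texture stage handles the reflectance-like detail under the bi-colored constraint, and the shading stage adds the illumination layer that makes the result more vivid.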
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU (a generic sketch of this tiled aggregation idea appears after this list).
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- Compositional Neural Textures [25.885557234297835]
This work introduces a fully unsupervised approach for representing textures using a compositional neural model.
We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance.
This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation.
arXiv Detail & Related papers (2024-04-18T21:09:34Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Color and Texture Dual Pipeline Lightweight Style Transfer [1.1863107884314108]
Style transfer methods typically generate a single stylized output in which the color and texture of the reference style are coupled.
We propose a Color and Texture Dual Pipeline Lightweight Style Transfer (CTDP) method, which employs a dual-pipeline design to output the results of color transfer and texture transfer simultaneously.
In comparative experiments, the color and texture transfer results generated by CTDP both achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-10-02T16:29:49Z)
- Texture Transform Attention for Realistic Image Inpainting [6.275013056564918]
We propose a Texture Transform Attention network that better reconstructs missing regions with fine details.
Texture Transform Attention is used to create a new reassembled texture map using fine textures and coarse semantics.
We evaluate our model end-to-end with the publicly available datasets CelebA-HQ and Places2.
arXiv Detail & Related papers (2020-12-08T06:28:51Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset demonstrate the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
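The "score aggregation strategy" mentioned in the Infinite Texture entry above is described only at a high level; below is a generic, hypothetical sketch of the overlap-averaged tiling pattern such strategies commonly build on (fixed-size model, arbitrary-size canvas). The `denoise_tile` callable and all parameters are assumptions for illustration, not the paper's actual algorithm.

```python
# Generic illustration of overlap-averaged tile aggregation: a fixed-size denoiser is
# applied to overlapping crops and the overlapping predictions are averaged, so an
# arbitrarily large canvas can be produced. `denoise_tile` is a hypothetical stand-in
# for one step of a fine-tuned diffusion model; nothing here is taken from the paper.
import numpy as np

def aggregate_tiles(canvas: np.ndarray, tile: int, stride: int, denoise_tile) -> np.ndarray:
    """Run `denoise_tile` on overlapping crops and average the overlaps."""
    h, w, _ = canvas.shape
    acc = np.zeros_like(canvas, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            crop = canvas[y:y + tile, x:x + tile]
            acc[y:y + tile, x:x + tile] += denoise_tile(crop)
            weight[y:y + tile, x:x + tile] += 1.0
    return acc / np.maximum(weight, 1.0)   # averaged prediction over the full canvas

# Toy usage: an "identity" denoiser on a 512x512 canvas tiled with 128-pixel windows.
canvas = np.random.rand(512, 512, 3)
out = aggregate_tiles(canvas, tile=128, stride=64, denoise_tile=lambda t: t)
```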