Color and Texture Dual Pipeline Lightweight Style Transfer
- URL: http://arxiv.org/abs/2310.01321v1
- Date: Mon, 2 Oct 2023 16:29:49 GMT
- Title: Color and Texture Dual Pipeline Lightweight Style Transfer
- Authors: ShiQi Jiang
- Abstract summary: Style transfer methods typically generate a single stylized output in which the color and texture of the reference style are coupled.
We propose a Color and Texture Dual Pipeline Lightweight Style Transfer (CTDP) method, which employs a dual pipeline to output color transfer and texture transfer results simultaneously.
In comparative experiments, the color and texture transfer results generated by CTDP both achieve state-of-the-art performance.
- Score: 1.1863107884314108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer methods typically generate a single stylized output in which the color and texture of the reference style are coupled, and color transfer schemes may introduce distortion or artifacts when processing reference images with duplicate textures. To address these problems, we propose a Color and Texture Dual Pipeline Lightweight Style Transfer (CTDP) method, which employs a dual pipeline to output color transfer and texture transfer results simultaneously. Furthermore, we design a masked total variation loss to suppress artifacts and small texture representations in color transfer results without affecting the semantic part of the content. More importantly, we are able to add texture structures with controllable intensity to color transfer results for the first time. Finally, we conduct a feature visualization analysis of the framework's texture generation mechanism and find that smoothing the input image can almost completely eliminate this texture structure. In comparative experiments, the color and texture transfer results generated by CTDP both achieve state-of-the-art performance. Additionally, the color transfer branch has a model size as low as 20k parameters, which is 100-1500 times smaller than that of other state-of-the-art models.
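To make the masked total variation loss and the controllable texture intensity concrete, the following is a minimal PyTorch sketch of how these two pieces could look. It is a reading of the abstract, not CTDP's actual implementation: the function names, the mask convention (1 marks artifact-prone regions to smooth, 0 marks semantic content to preserve), and the linear blend are illustrative assumptions.

```python
# Minimal sketch, assuming tensors in (B, C, H, W) layout.
# masked_tv_loss, blend_texture, and smooth_input are illustrative
# names and formulations, not CTDP's actual API.
import torch
import torch.nn.functional as F

def masked_tv_loss(img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Total variation penalty applied only where the mask is active.

    img:  (B, C, H, W) color transfer output
    mask: (B, 1, H, W) in [0, 1]; 1 marks artifact-prone regions to
          smooth, 0 marks semantic content to leave untouched.
    """
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs()  # differences along height
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs()  # differences along width
    # Masking the pixel differences suppresses artifacts and small texture
    # representations without penalizing edges in the semantic content.
    return (dh * mask[:, :, 1:, :]).mean() + (dw * mask[:, :, :, 1:]).mean()

def blend_texture(color_out: torch.Tensor,
                  texture_out: torch.Tensor,
                  alpha: float) -> torch.Tensor:
    """Add texture structure to the color result with intensity
    alpha in [0, 1]; a plain linear blend stands in for whatever
    control mechanism the paper actually uses."""
    return (1.0 - alpha) * color_out + alpha * texture_out

def smooth_input(img: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Box-filter smoothing of the input image; per the abstract's
    visualization analysis, smoothing the input almost completely
    eliminates the generated texture structure."""
    return F.avg_pool2d(img, kernel_size=k, stride=1, padding=k // 2)
```

In a full training loop, the masked TV term would presumably be weighted and added to the usual content and style losses; the abstract does not specify how the mask is constructed, though a semantic segmentation of the content image would be one natural choice.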
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that can transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement, combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- Lightweight texture transfer based on texture feature preset [1.1863107884314108]
We propose a lightweight texture transfer method based on texture feature presets.
The method not only produces visually superior results but also reduces the model size by 3.2-3538 times and speeds up the process by 1.8-5.6 times.
arXiv Detail & Related papers (2023-06-29T10:37:29Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Texture for Colors: Natural Representations of Colors Using Variable Bit-Depth Textures [13.180922099929765]
We present an automated method to transform an image to a set of binary textures that represent not only the intensities, but also the colors of the original.
The system yields aesthetically pleasing binary images when tested on a variety of image sources.
arXiv Detail & Related papers (2021-05-04T21:22:02Z)
- A deep learning based interactive sketching system for fashion images design [47.09122395308728]
We propose an interactive system to design diverse high-quality garment images from fashion sketches and texture information.
The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information.
In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges.
arXiv Detail & Related papers (2020-10-09T07:50:56Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset demonstrate the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)