Content-Conditioned Generation of Stylized Free-hand Sketches
- URL: http://arxiv.org/abs/2401.04739v1
- Date: Tue, 9 Jan 2024 05:57:35 GMT
- Title: Content-Conditioned Generation of Stylized Free-hand Sketches
- Authors: Jiajun Liu, Siyuan Wang, Guangming Zhu, Liang Zhang, Ning Li and
Eryang Gao
- Abstract summary: In some special fields such as the military field, free-hand sketches are difficult to sample on a large scale.
We propose a novel adversarial generative network that can accurately generate realistic free-hand sketches with various styles.
- Score: 13.474666287535317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the recognition of free-hand sketches has remained a popular
task. However, in some special fields such as the military field, free-hand
sketches are difficult to sample on a large scale. Common data augmentation and
image generation techniques are difficult to produce images with various
free-hand sketching styles. Therefore, the recognition and segmentation tasks
in related fields are limited. In this paper, we propose a novel adversarial
generative network that can accurately generate realistic free-hand sketches
with various styles. We explore the performance of the model, including using
styles randomly sampled from a prior normal distribution to generate images
with various free-hand sketching styles, disentangling the painters' styles
from known free-hand sketches to generate images with specific styles, and
generating images of unknown classes that are not in the training set. We
further demonstrate with qualitative and quantitative evaluations our
advantages in visual quality, content accuracy, and style imitation on
SketchIME.
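The abstract describes two generation modes: drawing a style code from a standard normal prior, or reusing a style disentangled from a reference sketch, while the content class is fixed. A minimal NumPy sketch of that content-conditioned interface is below; the dimensions, the toy two-layer generator, and the `generate` function are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 10 sketch classes,
# an 8-dim style code, and 28x28 output sketches.
NUM_CLASSES, STYLE_DIM, IMG_SIDE = 10, 8, 28

# Toy random weights standing in for a trained generator network.
W1 = rng.standard_normal((NUM_CLASSES + STYLE_DIM, 128)) * 0.1
W2 = rng.standard_normal((128, IMG_SIDE * IMG_SIDE)) * 0.1

def generate(content_label: int, style=None) -> np.ndarray:
    """Generate one sketch conditioned on a content class and a style code.

    If no style is given, sample one from the prior N(0, I), mirroring the
    "styles randomly sampled from a prior normal distribution" setting.
    """
    if style is None:
        style = rng.standard_normal(STYLE_DIM)
    onehot = np.zeros(NUM_CLASSES)
    onehot[content_label] = 1.0
    h = np.tanh(np.concatenate([onehot, style]) @ W1)   # hidden features
    return np.tanh(h @ W2).reshape(IMG_SIDE, IMG_SIDE)  # sketch in [-1, 1]

# Same content class, two random styles -> two renderings of one symbol.
a = generate(content_label=3)
b = generate(content_label=3)
# A style code disentangled from a known reference sketch could be reused
# to render new content in that specific painter's style:
ref_style = rng.standard_normal(STYLE_DIM)
c = generate(content_label=3, style=ref_style)
```

The point of the sketch is the interface: content enters as a discrete label, style as a continuous vector, so varying one while holding the other fixed yields either style diversity or style imitation.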
Related papers
- Semi-supervised reference-based sketch extraction using a contrastive learning framework [6.20476217797034]
We propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch with unpaired data training.
Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2024-07-19T04:51:34Z)
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and their corresponding sketches.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- HAIFIT: Human-to-AI Fashion Image Translation [6.034505799418777]
We introduce HAIFIT, a novel approach that transforms sketches into high-fidelity, lifelike clothing images.
Our method excels in preserving the distinctive style and intricate details essential for fashion design applications.
arXiv Detail & Related papers (2024-03-13T16:06:07Z)
- Customize StyleGAN with One Hand Sketch [0.0]
We propose a framework to control StyleGAN imagery with a single user sketch.
We learn a conditional distribution in the latent space of a pre-trained StyleGAN model via energy-based learning.
Our model can generate multi-modal images semantically aligned with the input sketch.
arXiv Detail & Related papers (2023-10-29T09:32:33Z)
- Adaptively-Realistic Image Generation from Stroke and Sketch with Diffusion Model [31.652827838300915]
We propose a unified framework supporting a three-dimensional control over the image synthesis from sketches and strokes based on diffusion models.
Our framework achieves state-of-the-art performance while providing flexibility in generating customized images with control over shape, color, and realism.
Our method unleashes applications such as editing on real images, generation with partial sketches and strokes, and multi-domain multi-modal synthesis.
arXiv Detail & Related papers (2022-08-26T13:59:26Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
The key contribution is an attribute-vector-bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.