Quality Metric Guided Portrait Line Drawing Generation from Unpaired
Training Data
- URL: http://arxiv.org/abs/2202.03678v1
- Date: Tue, 8 Feb 2022 06:49:57 GMT
- Title: Quality Metric Guided Portrait Line Drawing Generation from Unpaired
Training Data
- Authors: Ran Yi, Yong-Jin Liu, Yu-Kun Lai, Paul L. Rosin
- Abstract summary: We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
- Score: 88.78171717494688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face portrait line drawing is a unique style of art which is highly abstract
and expressive. However, due to its high semantic constraints, many existing
methods learn to generate portrait drawings using paired training data, which
is costly and time-consuming to obtain. In this paper, we propose a novel
method to automatically transform face photos to portrait drawings using
unpaired training data, with two new features: our method can (1) learn to
generate high quality portrait drawings in multiple styles using a single
network and (2) generate portrait drawings in a "new style" unseen in the
training data. To achieve these benefits, we (1) propose a novel quality metric
for portrait drawings which is learned from human perception, and (2) introduce
a quality loss to guide the network toward generating better-looking portrait
drawings. We observe that existing unpaired translation methods such as
CycleGAN tend to embed invisible reconstruction information indiscriminately
throughout the whole drawing, due to the significant information imbalance
between the photo and portrait drawing domains, which causes important facial
features to be lost.
To address this problem, we propose a novel asymmetric cycle mapping that
enforces the reconstruction information to be visible and only embedded in the
selected facial regions. Together with localized discriminators for important
facial regions, our method preserves all important facial features in the
generated drawings. Generator dissection further shows that our model learns
to incorporate face semantic information during drawing generation. Extensive
experiments including a user study show that our model outperforms
state-of-the-art methods.
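To make the objective concrete, here is a minimal, hypothetical PyTorch-style sketch of how a learned quality score and a region-masked (asymmetric) cycle term could be combined with an adversarial loss. All module names, loss weights, and the mask source are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def generator_loss(G, F_inv, D, Q, photo, face_mask,
                   w_adv=1.0, w_qual=0.5, w_cyc=10.0):
    """One generator update of a quality-guided, asymmetric-cycle objective.

    G:         photo -> drawing generator
    F_inv:     drawing -> photo generator (inverse mapping)
    D:         drawing-domain discriminator (returns logits)
    Q:         frozen quality model scoring drawings (higher = better)
    face_mask: 1 inside selected facial regions, 0 elsewhere
    """
    drawing = G(photo)

    # (1) Adversarial loss: make generated drawings look real to D.
    logits = D(drawing)
    loss_adv = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))

    # (2) Quality loss: raise the learned quality score of the output.
    loss_qual = -Q(drawing).mean()

    # (3) Asymmetric cycle loss: penalize reconstruction error only inside
    # the selected facial regions, so reconstruction information stays
    # visible and localized instead of being hidden across the whole image.
    recon = F_inv(drawing)
    loss_cyc = (face_mask * (recon - photo).abs()).mean()

    return w_adv * loss_adv + w_qual * loss_qual + w_cyc * loss_cyc
```

The paper's localized discriminators would add analogous adversarial terms computed on crops of important facial regions (e.g., eyes, nose, mouth).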
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and corresponding sketches (a rough illustration of this idea follows below).
arXiv Detail & Related papers (2024-03-17T16:25:25Z) - PatternPortrait: Draw Me Like One of Your Scribbles [2.01243755755303]
- PatternPortrait: Draw Me Like One of Your Scribbles [2.01243755755303]
This paper introduces a process for generating abstract portrait drawings from pictures.
The drawings' distinctive style comes from using single freehand pattern sketches as references to generate unique shading patterns.
The method involves extracting facial and body features from images and transforming them into vector lines.
arXiv Detail & Related papers (2024-01-22T12:33:11Z) - Enhancing the Authenticity of Rendered Portraits with
Identity-Consistent Transfer Learning [30.64677966402945]
We present a novel photo-realistic portrait generation framework that can effectively mitigate the "uncanny valley" effect.
Our key idea is to employ transfer learning to learn an identity-consistent mapping from the latent space of rendered portraits to that of real portraits.
arXiv Detail & Related papers (2023-10-06T12:20:40Z) - Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments confirm our hypothesis and show that our sketch-based saliency detection model performs competitively with the state of the art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z) - CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer
Learning [77.27821665339492]
CtlGAN is a new few-shot artistic portrait generation model with a novel contrastive transfer learning strategy.
We adapt a pretrained StyleGAN in the source domain to a target artistic domain with no more than 10 artistic faces.
We propose a new encoder that embeds real faces into Z+ space, along with a dual-path training strategy to better cope with the adapted decoder.
arXiv Detail & Related papers (2022-03-16T13:28:17Z) - Face sketch to photo translation using generative adversarial networks [1.0312968200748118]
We use a pre-trained face photo generating model to synthesize high-quality natural face photos.
We train a network to map facial features extracted from the input sketch to a vector in the latent space of the face-generating model (a minimal sketch of this mapping follows below).
The proposed model achieves an SSIM of 0.655 and a 97.59% rank-1 face recognition rate.
arXiv Detail & Related papers (2021-10-23T20:01:20Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z) - Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)