DeepFacePencil: Creating Face Images from Freehand Sketches
- URL: http://arxiv.org/abs/2008.13343v1
- Date: Mon, 31 Aug 2020 03:35:21 GMT
- Title: DeepFacePencil: Creating Face Images from Freehand Sketches
- Authors: Yuhang Li and Xuejin Chen and Binxin Yang and Zihan Chen and Zhihua Cheng and Zheng-Jun Zha
- Abstract summary: Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
- Score: 77.00929179469559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the task of generating photo-realistic face images
from hand-drawn sketches. Existing image-to-image translation methods require a
large-scale dataset of paired sketches and images for supervision. They
typically utilize synthesized edge maps of face images as training data.
However, these synthesized edge maps strictly align with the edges of the
corresponding face images, which limits their generalization to real
hand-drawn sketches with vast stroke diversity. To address this problem, we
propose DeepFacePencil, an effective tool that is able to generate
photo-realistic face images from hand-drawn sketches, based on a novel dual
generator image translation network during training. A novel spatial attention
pooling (SAP) module is designed to adaptively handle spatially varying stroke
distortions, supporting diverse stroke styles and different levels of detail.
Extensive experiments demonstrate the superiority of our model over existing
methods in both image quality and generalization to hand-drawn sketches.
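The listing itself contains no code; purely as an illustration, the PyTorch sketch below shows one plausible reading of spatial attention pooling: the sketch feature map is average-pooled at several scales, and a small convolutional branch predicts per-location weights that blend the scales, so regions with imprecise strokes can be smoothed more aggressively than carefully drawn details. All names and design choices here (SpatialAttentionPooling, pool_sizes, the softmax blend) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionPooling(nn.Module):
    """Hypothetical sketch of spatial attention pooling (SAP).

    Pools the sketch feature map at several kernel sizes, then blends the
    pooled variants with a learned per-pixel attention map, so the effective
    receptive field varies spatially to absorb local stroke distortions.
    """

    def __init__(self, channels: int, pool_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # Predicts one attention weight per pooling scale at each location.
        self.attn = nn.Conv2d(channels, len(pool_sizes), kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average-pool the features at each scale; stride 1 with "same"
        # padding keeps the spatial resolution so all scales stay aligned.
        pooled = [
            F.avg_pool2d(x, k, stride=1, padding=k // 2) for k in self.pool_sizes
        ]
        pooled = torch.stack(pooled, dim=1)           # (B, S, C, H, W)
        weights = F.softmax(self.attn(x), dim=1)      # (B, S, H, W)
        # Weighted sum over scales: larger kernels can dominate where strokes
        # are imprecise, smaller kernels where fine detail should survive.
        return (weights.unsqueeze(2) * pooled).sum(dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 128, 128)      # toy sketch feature map
    out = SpatialAttentionPooling(64)(feat)
    print(out.shape)                         # torch.Size([1, 64, 128, 128])
```

Stride-1 pooling is chosen here so the attention map and every pooled variant share one resolution; how the actual SAP module is wired into the dual-generator network is not specified in this summary.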
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and their corresponding sketches.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- DeepPortraitDrawing: Generating Human Body Images from Freehand Sketches [75.4318318890065]
We present DeepPortraitDrawing, a framework for converting roughly drawn sketches to realistic human body images.
To encode complicated body shapes under various poses, we take a local-to-global approach.
Our method produces more realistic images than the state-of-the-art sketch-to-image synthesis techniques.
arXiv Detail & Related papers (2022-05-04T14:02:45Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high-quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- dualFace: Two-Stage Drawing Guidance for Freehand Portrait Sketching [8.83917959649942]
dualFace consists of two-stage drawing assistance to provide global and local visual guidance.
In the stage of global guidance, the user draws several contour lines, and dualFace displays the suggested face contour lines over the background of the canvas.
In the stage of local guidance, we synthesize detailed portrait images with a deep generative model from user-drawn contour lines, and use the synthesized results as detailed drawing guidance.
arXiv Detail & Related papers (2021-04-26T00:56:37Z)
- Learning to Caricature via Semantic Shape Transform [95.25116681761142]
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
- Deep Generation of Face Images from Sketches [36.146494762987146]
Deep image-to-image translation techniques allow fast generation of face images from freehand sketches.
Existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input.
We propose to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.
Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches (a minimal illustration of this soft-constraint idea appears after this list).
arXiv Detail & Related papers (2020-06-01T16:20:23Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
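As referenced above, here is a minimal, hypothetical sketch of the soft-constraint idea from "Deep Generation of Face Images from Sketches": an encoded sketch feature is replaced by a convex combination of its nearest neighbors in a bank of plausible face features, pulling rough or incomplete inputs toward the learned face manifold. The function name, the feature bank, and the cosine-similarity choice are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def project_to_shape_space(sketch_feat: torch.Tensor,
                           bank: torch.Tensor,
                           k: int = 5) -> torch.Tensor:
    """Hypothetical soft-constraint projection.

    Replaces an encoded sketch feature with a softmax-weighted blend of its
    k most similar entries from a bank of plausible face features, so rough
    strokes are treated as a soft constraint rather than matched exactly.
    """
    # Cosine similarity between the sketch feature and every bank entry.
    sims = F.cosine_similarity(sketch_feat.unsqueeze(0), bank, dim=1)
    topk = sims.topk(k)
    # Soft weights over the neighbors, softer than a hard nearest-neighbor.
    weights = F.softmax(topk.values, dim=0)
    return (weights.unsqueeze(1) * bank[topk.indices]).sum(dim=0)

bank = torch.randn(1000, 512)        # toy bank of plausible face features
sketch_feat = torch.randn(512)       # encoded freehand sketch
refined = project_to_shape_space(sketch_feat, bank)
print(refined.shape)                 # torch.Size([512])
```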