Deep Generation of Face Images from Sketches
- URL: http://arxiv.org/abs/2006.01047v2
- Date: Fri, 5 Jun 2020 02:37:46 GMT
- Title: Deep Generation of Face Images from Sketches
- Authors: Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, Hongbo Fu
- Abstract summary: Deep image-to-image translation techniques allow fast generation of face images from freehand sketches.
Existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input.
We propose to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.
Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.
- Score: 36.146494762987146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep image-to-image translation techniques allow fast generation of
face images from freehand sketches. However, existing solutions tend to overfit
to sketches, thus requiring professional sketches or even edge maps as input.
To address this issue, our key idea is to implicitly model the shape space of
plausible face images and synthesize a face image in this space to approximate
an input sketch. We take a local-to-global approach. We first learn feature
embeddings of key face components, and push corresponding parts of input
sketches towards underlying component manifolds defined by the feature vectors
of face component samples. We also propose another deep neural network to learn
the mapping from the embedded component features to realistic images with
multi-channel feature maps as intermediate results to improve the information
flow. Our method essentially uses input sketches as soft constraints and is
thus able to produce high-quality face images even from rough and/or incomplete
sketches. Our tool is easy to use even for non-artists, while still supporting
fine-grained control of shape details. Both qualitative and quantitative
evaluations show the superior generation ability of our system to existing and
alternative solutions. The usability and expressiveness of our system are
confirmed by a user study.
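
As a rough illustration of the local-to-global pipeline described in the abstract, the Python/NumPy sketch below embeds each face-component patch of an input sketch, softly projects the resulting feature onto that component's manifold, and passes the projected features to a synthesis network. The component list, the `encoders`/`sample_banks`/`decoder` stand-ins, and the K-nearest-neighbour interpolation are assumptions for illustration only, not the paper's actual trained networks or its exact manifold model.

```python
import numpy as np

# Hypothetical component set: the paper works component-by-component
# (local-to-global); the exact decomposition below is an assumption.
COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "remainder"]

def project_to_manifold(query_feat, sample_feats, k=10):
    """Softly project a sketch-component feature toward the component
    manifold, approximated here as an inverse-distance-weighted blend of
    its K nearest neighbours among feature vectors of face-component
    samples (a simplification of the paper's implicit manifold model)."""
    dists = np.linalg.norm(sample_feats - query_feat, axis=1)  # (N,)
    nn_idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nn_idx] + 1e-8)   # closer samples weigh more
    weights /= weights.sum()
    return weights @ sample_feats[nn_idx]    # blended feature on the manifold

def synthesize_face(sketch_patches, encoders, sample_banks, decoder):
    """Local-to-global pipeline: embed each component patch of the input
    sketch, pull it onto its component manifold (the soft constraint),
    then map all projected features to a face image with one decoder."""
    projected = {}
    for name in COMPONENTS:
        feat = encoders[name](sketch_patches[name])             # component embedding
        projected[name] = project_to_manifold(feat, sample_banks[name])
    return decoder(projected)                                   # features -> image
```

Because each rough or incomplete stroke pattern is replaced by a blend of nearby plausible component features rather than copied verbatim, the input sketch acts as a soft constraint: the synthesized face stays within the learned shape space even when the drawing does not.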
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face images and their corresponding sketches.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- Block and Detail: Scaffolding Sketch-to-Image Generation [65.56590359051634]
We introduce a novel sketch-to-image tool that aligns with the iterative refinement process of artists.
Our tool lets users sketch blocking strokes to coarsely represent the placement and form of objects and detail strokes to refine their shape and silhouettes.
We develop a two-pass algorithm for generating high-fidelity images from such sketches at any point in the iterative process.
arXiv Detail & Related papers (2024-02-28T07:09:31Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Learning Position-Aware Implicit Neural Network for Real-World Face Inpainting [55.87303287274932]
Face inpainting requires the model to have a precise global understanding of the facial position structure.
In this paper, we propose an Implicit Neural Inpainting Network (IN$^2$) to handle arbitrary-shape face images in real-world scenarios.
arXiv Detail & Related papers (2024-01-19T07:31:44Z)
- DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model [8.1818090854822]
We introduce a Sketch-Guided Latent Diffusion Model (SGLDM), an LDM-based network architecture trained on a paired sketch-face dataset.
SGLDM can synthesize high-quality face images with different expressions, facial accessories, and hairstyles from various sketches with different abstraction levels.
arXiv Detail & Related papers (2023-02-14T08:51:47Z)
- Sketch-Guided Text-to-Image Diffusion Models [57.12095262189362]
We introduce a universal approach to guide a pretrained text-to-image diffusion model.
Our method does not require training a dedicated model or a specialized encoder for the task.
We take a particular focus on the sketch-to-image translation task, revealing a robust and expressive way to generate images.
arXiv Detail & Related papers (2022-11-24T18:45:32Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)