Bridging Unpaired Facial Photos And Sketches By Line-drawings
- URL: http://arxiv.org/abs/2102.00635v2
- Date: Wed, 3 Feb 2021 03:53:42 GMT
- Title: Bridging Unpaired Facial Photos And Sketches By Line-drawings
- Authors: Meimei Shang, Fei Gao, Xiang Li, Jingjie Zhu, Lingna Dai
- Abstract summary: We propose a novel method to learn face sketch synthesis models by using unpaired data.
We map both photos and sketches to line-drawings by using a neural style transfer method.
Experimental results demonstrate that sRender can generate multi-style sketches, and significantly outperforms existing unpaired image-to-image translation methods.
- Score: 5.589846737887013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel method to learn face sketch synthesis
models by using unpaired data. Our main idea is bridging the photo domain
$\mathcal{X}$ and the sketch domain $\mathcal{Y}$ by using the line-drawing domain
$\mathcal{Z}$. Specifically, we map both photos and sketches to line-drawings by
using a neural style transfer method, i.e. $F: \mathcal{X}/\mathcal{Y} \mapsto
\mathcal{Z}$. Consequently, we obtain \textit{pseudo paired data}
$(\mathcal{Z}, \mathcal{Y})$, and can learn the mapping $G:\mathcal{Z} \mapsto
\mathcal{Y}$ in a supervised learning manner. In the inference stage, given a
facial photo, we can first transfer it to a line-drawing and then to a sketch
by $G \circ F$. Additionally, we propose a novel stroke loss for generating
different types of strokes. Our method, termed sRender, accords well with human
artists' rendering process. Experimental results demonstrate that sRender can
generate multi-style sketches, and significantly outperforms existing unpaired
image-to-image translation methods.
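As an illustration of this pipeline, the following is a minimal, hypothetical PyTorch sketch of the idea, not the authors' implementation: a crude Laplacian edge filter stands in for the neural style transfer $F$, a toy convolutional generator stands in for $G$, and the proposed stroke loss is left as a placeholder since its formulation is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as TF

# Crude stand-in for F: X/Y -> Z. The paper uses neural style transfer;
# here a fixed Laplacian edge filter merely plays that role.
_LAPLACIAN = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]).view(1, 1, 3, 3)

def to_linedrawing(images: torch.Tensor) -> torch.Tensor:
    gray = images.mean(dim=1, keepdim=True)
    edges = TF.conv2d(gray, _LAPLACIAN, padding=1).abs().clamp(0, 1)
    return (1.0 - edges).repeat(1, 3, 1, 1)  # dark strokes on white paper

# Toy generator G: Z -> Y; the paper would use a full encoder-decoder.
G = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

def train_step(sketches: torch.Tensor, opt: torch.optim.Optimizer) -> float:
    # Pseudo pairs (Z, Y): line-drawings derived from real, unpaired sketches.
    with torch.no_grad():
        z = to_linedrawing(sketches)
    y_hat = G(z)
    loss = TF.l1_loss(y_hat, sketches)  # + stroke loss (omitted; see paper)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def infer(photo: torch.Tensor) -> torch.Tensor:
    # Inference is the composition G ∘ F: photo -> line-drawing -> sketch.
    return G(to_linedrawing(photo))
```

A training loop would simply call `train_step` on batches of real sketches (e.g. with `opt = torch.optim.Adam(G.parameters(), lr=1e-4)`); no paired photo-sketch data is ever required, which is the point of the pseudo-pair construction.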
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and corresponding sketches.
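As a rough illustration of how such deep features can be harvested, here is a generic, hypothetical hook-based extractor; the generator handle, the latent `w`, and the layer names are assumptions, not StyleSketch's actual API.

```python
import torch
import torch.nn as nn

def collect_features(generator: nn.Module, w: torch.Tensor, layer_names):
    """Grab intermediate activations from a pretrained generator via forward
    hooks; such multi-scale deep features carry rich face semantics and could
    supervise a small sketch decoder trained on very few pairs."""
    feats, handles = [], []
    for name, module in generator.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(
                lambda _m, _i, out, store=feats: store.append(out)))
    with torch.no_grad():
        generator(w)
    for h in handles:
        h.remove()
    return feats
```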
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- SketchINR: A First Look into Sketches as Implicit Neural Representations [120.4152701687737]
We propose SketchINR, to advance the representation of vector sketches with implicit neural models.
A variable length vector sketch is compressed into a latent space of fixed dimension that implicitly encodes the underlying shape as a function of time and strokes.
For the first time, SketchINR emulates the human ability to reproduce a sketch with varying abstraction in terms of number and complexity of strokes.
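A minimal sketch of this idea, assuming a plain MLP decoder rather than SketchINR's actual architecture: a fixed-size latent plus a time stamp is mapped to a point and a pen state, so the sketch can be rendered at any temporal resolution.

```python
import torch
import torch.nn as nn

class SketchImplicitDecoder(nn.Module):
    """Hypothetical INR-style decoder: a fixed-size latent and a time stamp
    t in [0, 1] are mapped to a point (x, y) and a pen-lift logit, mirroring
    the idea of a sketch as a function of time and strokes."""
    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # (x, y, pen_up_logit)
        )

    def forward(self, latent: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([latent, t], dim=-1))

# Sample t densely or sparsely to trade stroke complexity for abstraction.
decoder = SketchImplicitDecoder()
z = torch.randn(1, 128)
ts = torch.linspace(0, 1, 200).unsqueeze(-1)  # (200, 1) time stamps
points = decoder(z.expand(200, -1), ts)       # (200, 3) decoded trajectory
```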
arXiv Detail & Related papers (2024-03-14T12:49:29Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model [8.1818090854822]
We introduce a Sketch-Guided Latent Diffusion Model (SGLDM), an LDM-based network architecture trained on a paired sketch-face dataset.
SGLDM can synthesize high-quality face images with different expressions, facial accessories, and hairstyles from various sketches with different abstraction levels.
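A toy stand-in for sketch conditioning in an LDM-style denoiser, purely illustrative: the module names and channel counts are assumptions, and timestep embeddings and attention are omitted for brevity.

```python
import torch
import torch.nn as nn

class SketchConditionedDenoiser(nn.Module):
    """Toy sketch-conditioned denoiser: the sketch is encoded and concatenated
    with the noisy face latent, and the network predicts the noise. Assumes
    the sketch is resized to the latent resolution beforehand."""
    def __init__(self, latent_ch: int = 4, cond_ch: int = 4, hidden: int = 64):
        super().__init__()
        self.sketch_encoder = nn.Conv2d(1, cond_ch, 3, padding=1)
        self.denoise = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        cond = self.sketch_encoder(sketch)  # embed the input sketch
        return self.denoise(torch.cat([noisy_latent, cond], dim=1))
```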
arXiv Detail & Related papers (2023-02-14T08:51:47Z)
- Delving StyleGAN Inversion for Image Editing: A Foundation Latent Space Viewpoint [76.00222741383375]
GAN inversion and editing via StyleGAN maps an input image into the embedding spaces ($\mathcal{W}$, $\mathcal{W}+$, and $\mathcal{F}$) to simultaneously maintain image fidelity and meaningful manipulation.
Recent GAN inversion methods typically explore $\mathcal{W}+$ and $\mathcal{F}$ rather than $\mathcal{W}$ to improve reconstruction fidelity while maintaining editability.
We introduce contrastive learning to align $\mathcal{W}$ and the image space for precise latent…
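A minimal sketch of such an alignment objective, assuming a standard InfoNCE-style contrastive loss rather than the paper's exact formulation: matched (latent, image) pairs are positives, all other pairs in the batch are negatives.

```python
import torch
import torch.nn.functional as TF

def contrastive_alignment_loss(w_codes: torch.Tensor,
                               img_embeds: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Hypothetical InfoNCE-style loss aligning W-space codes (B, D) with
    embeddings of the corresponding images (B, D)."""
    w = TF.normalize(w_codes, dim=-1)
    v = TF.normalize(img_embeds, dim=-1)
    logits = w @ v.t() / temperature  # (B, B) pairwise similarity matrix
    targets = torch.arange(w.size(0), device=w.device)  # diagonal = positives
    return TF.cross_entropy(logits, targets)
```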
arXiv Detail & Related papers (2022-11-21T13:35:32Z)
- SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation [85.09014441196692]
We introduce a method for $\mathbf{E}$diting $\mathbf{I}$mplicit $\mathbf{S}$hapes $\mathbf{T}$hrough part-aware generation (SPAGHETTI).
Our architecture allows for manipulation of implicit shapes by means of transforming, interpolating and combining shape segments together.
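Schematically, representing a shape as a set of per-part latents makes these edits simple vector operations; the snippet below is illustrative only, not SPAGHETTI's interface.

```python
import torch

# Illustrative part-aware editing: a shape is a set of per-part latents, so
# transforming, interpolating, and combining segments reduce to operations
# on individual latent rows.
parts_a = torch.randn(8, 64)  # shape A: 8 part latents of dimension 64
parts_b = torch.randn(8, 64)  # shape B

edited = parts_a.clone()
edited[3] = 0.5 * parts_a[3] + 0.5 * parts_b[3]  # interpolate one segment
edited[5] = parts_b[5]                           # transplant another segment
```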
arXiv Detail & Related papers (2022-01-31T12:31:41Z)
- SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches [95.45728042499836]
We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
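A schematic of that flow, with hypothetical module names and shapes; the generator itself is omitted.

```python
import torch
import torch.nn as nn

class MaskFreeEditor(nn.Module):
    """Schematic of mask-free editing: predict the modification region from
    image + sketch, then encode the region into a structure style vector; a
    generator (omitted) would synthesize new content from both."""
    def __init__(self, style_dim: int = 256):
        super().__init__()
        self.region_predictor = nn.Conv2d(4, 1, 3, padding=1)  # RGB + sketch channel
        self.style_encoder = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, style_dim))

    def forward(self, image: torch.Tensor, sketch: torch.Tensor):
        x = torch.cat([image, sketch], dim=1)             # (B, 4, H, W)
        region = torch.sigmoid(self.region_predictor(x))  # soft target region
        style = self.style_encoder(image * region)        # structure style vector
        return region, style
```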
arXiv Detail & Related papers (2021-11-30T02:42:31Z)
- Face Sketch Synthesis with Style Transfer using Pyramid Column Feature [22.03011875851739]
We propose a novel framework based on deep neural networks for face sketch synthesis from a photo.
A content image is first generated that outlines the shape of the face and the key facial features.
Textures and shadings are then added to enrich the details of the sketch.
arXiv Detail & Related papers (2020-09-18T08:15:55Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.