CariMe: Unpaired Caricature Generation with Multiple Exaggerations
- URL: http://arxiv.org/abs/2010.00246v1
- Date: Thu, 1 Oct 2020 08:14:32 GMT
- Title: CariMe: Unpaired Caricature Generation with Multiple Exaggerations
- Authors: Zheng Gu, Chuanqi Dong, Jing Huo, Wenbin Li, Yang Gao
- Abstract summary: Caricature generation aims to translate real photos into caricatures with artistic styles and shape exaggerations.
Previous caricature generation methods focus on predicting a single, deterministic image warping from a given photo.
We propose a Multi-exaggeration Warper network to learn the distribution-level mapping from photo to facial exaggerations.
- Score: 22.342630945133312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caricature generation aims to translate real photos into caricatures with
artistic styles and shape exaggerations while maintaining the identity of the
subject. Different from the generic image-to-image translation, drawing a
caricature automatically is a more challenging task due to the existence of
various spatial deformations. Previous caricature generation methods focus on
predicting a single, deterministic image warping from a given photo while
ignoring the intrinsic representation and distribution of exaggerations in
caricatures. This limits their ability to generate diverse exaggerations. In
this paper, we generalize the caricature generation problem from instance-level
warping prediction to distribution-level deformation modeling. Based on this
assumption, we present the first exploration of unpaired CARIcature generation
with Multiple Exaggerations (CariMe). Technically, we propose a
Multi-exaggeration Warper network to learn the distribution-level mapping from
photo to facial exaggerations. This makes it possible to generate diverse and
reasonable exaggerations from randomly sampled warp codes given one input
photo. To better represent the facial exaggeration and produce fine-grained
warping, a deformation-field-based warping method is also proposed, which helps
us to capture more detailed exaggerations than other point-based warping
methods. Experiments and two perceptual studies demonstrate the superiority of
our method compared with other state-of-the-art approaches on caricature
generation.
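The abstract describes two concrete mechanisms: a Multi-exaggeration Warper that maps a photo plus a randomly sampled warp code to an exaggeration, and a deformation-field-based warping step that applies fine-grained, per-pixel offsets to the photo. The sketch below is a minimal PyTorch illustration of that pipeline only; the module names, layer sizes, and the grid_sample-based resampling are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExaggerationWarper(nn.Module):
    """Hypothetical sketch of a warper that maps a photo plus a random warp
    code to a dense 2-channel deformation field (layer sizes are illustrative,
    not the authors' architecture)."""

    def __init__(self, code_dim: int = 32):
        super().__init__()
        self.code_dim = code_dim
        self.encoder = nn.Sequential(                      # photo -> coarse features
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.flow_head = nn.Sequential(                    # features + code -> flow
            nn.Conv2d(64 + code_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),     # offsets in [-1, 1]
        )

    def forward(self, photo: torch.Tensor, warp_code: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(photo)
        b, _, h, w = feat.shape
        # Broadcast the sampled warp code over the spatial grid and fuse it.
        code_map = warp_code.view(b, self.code_dim, 1, 1).expand(b, self.code_dim, h, w)
        flow = self.flow_head(torch.cat([feat, code_map], dim=1))
        # Upsample the coarse deformation field to the photo resolution.
        return F.interpolate(flow, size=photo.shape[2:], mode="bilinear",
                             align_corners=True)


def warp_with_field(photo: torch.Tensor, flow: torch.Tensor,
                    max_offset: float = 0.1) -> torch.Tensor:
    """Deformation-field-based warping: add scaled per-pixel offsets to an
    identity sampling grid and resample the photo with grid_sample."""
    b, _, h, w = photo.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, h, w, 2)
    grid = identity + max_offset * flow.permute(0, 2, 3, 1)   # (B, H, W, 2)
    return F.grid_sample(photo, grid, align_corners=True)


if __name__ == "__main__":
    warper = MultiExaggerationWarper(code_dim=32)
    photo = torch.rand(1, 3, 256, 256)
    # Different randomly sampled warp codes yield different exaggerations
    # of the same input photo.
    for _ in range(3):
        z = torch.randn(1, 32)
        warped = warp_with_field(photo, warper(photo, z))
        print(warped.shape)    # torch.Size([1, 3, 256, 256])
```

Sampling several warp codes for the same photo is what yields multiple exaggerations, and the dense deformation field is what allows finer-grained warping than methods that move a sparse set of facial points.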
Related papers
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods require the use of example-based adaptation approaches to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- High-Quality Face Caricature via Style Translation [1.3457834965263997]
We propose a high-quality, unpaired face caricature method that is appropriate for use in the real world.
We attain the exaggeration of facial features and the stylization of appearance through a two-step process.
The face caricature projection step employs an encoder, trained on real and caricature faces alongside the pretrained generator, to project both real and caricature faces.
arXiv Detail & Related papers (2023-11-22T12:03:33Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- Unsupervised Contrastive Photo-to-Caricature Translation based on Auto-distortion [49.93278173824292]
Photo-to-caricature translation aims to synthesize a caricature as a rendered image that exaggerates facial features through sketching, pencil strokes, or other artistic drawing.
Style rendering and geometry deformation are the most important aspects of the photo-to-caricature translation task.
We propose an unsupervised contrastive photo-to-caricature translation architecture.
arXiv Detail & Related papers (2020-11-10T08:14:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Learning to Caricature via Semantic Shape Transform [95.25116681761142]
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
- AutoToon: Automatic Geometric Warping for Face Cartoon Generation [0.0]
We propose AutoToon, the first supervised deep learning method that yields high-quality warps for the warping component of caricatures.
In contrast to prior art, we leverage an SENet and spatial transformer module and train directly on artist warping fields.
We achieve appealing exaggerations that amplify distinguishing features of the face while preserving facial detail.
arXiv Detail & Related papers (2020-04-06T02:27:51Z)
- MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration [53.98437317161086]
Given an input face photo, the goal of caricature generation is to produce stylized, exaggerated caricatures that share the same identity as the photo.
We propose a novel framework called Multi-Warping GAN (MW-GAN), including a style network and a geometric network.
Experiments show that caricatures generated by MW-GAN have better quality than existing methods.
arXiv Detail & Related papers (2020-01-07T03:08:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.