Learning to Caricature via Semantic Shape Transform
- URL: http://arxiv.org/abs/2008.05090v2
- Date: Thu, 13 Aug 2020 06:58:02 GMT
- Title: Learning to Caricature via Semantic Shape Transform
- Authors: Wenqing Chu, Wei-Chih Hung, Yi-Hsuan Tsai, Yu-Ting Chang, Yijun Li,
Deng Cai, Ming-Hsuan Yang
- Abstract summary: We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
- Score: 95.25116681761142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caricature is an artistic drawing created to abstract or exaggerate facial
features of a person. Rendering visually pleasing caricatures is a difficult
task that requires professional skills, and thus it is of great interest to
design a method to automatically generate such drawings. To deal with large
shape changes, we propose an algorithm based on a semantic shape transform to
produce diverse and plausible shape exaggerations. Specifically, we predict
pixel-wise semantic correspondences and perform image warping on the input
photo to achieve dense shape transformation. We show that the proposed
framework is able to render visually pleasing shape exaggerations while
maintaining their facial structures. In addition, our model allows users to
manipulate the shape via the semantic map. We demonstrate the effectiveness of
our approach on a large photograph-caricature benchmark dataset with
comparisons to the state-of-the-art methods.
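To make the warping step described in the abstract concrete, here is a minimal sketch (not the authors' released code) of how an image can be densely warped from a predicted pixel-wise correspondence field. The function name `warp_with_correspondences`, the offset-field convention, and the use of PyTorch's `grid_sample` are illustrative assumptions.

```python
# Minimal sketch of dense warping from a predicted per-pixel correspondence
# field: each output pixel is resampled from a source location in the photo.
# Names and conventions here are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def warp_with_correspondences(photo: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """photo: (N, 3, H, W) input image; flow: (N, 2, H, W) per-pixel offsets
    (in pixels) pointing from each output pixel to its source location."""
    n, _, h, w = photo.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=photo.device, dtype=photo.dtype),
        torch.arange(w, device=photo.device, dtype=photo.dtype),
        indexing="ij",
    )
    src_x = xs.unsqueeze(0) + flow[:, 0]  # where each output pixel samples from
    src_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1], as required by grid_sample.
    grid = torch.stack(
        (2.0 * src_x / (w - 1) - 1.0, 2.0 * src_y / (h - 1) - 1.0), dim=-1
    )  # (N, H, W, 2)
    return F.grid_sample(photo, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Example: an identity (all-zero) flow leaves the photo unchanged.
photo = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
warped = warp_with_correspondences(photo, flow)
```

In the paper's setting, the correspondence field would be derived from the predicted semantic shape transform rather than supplied directly as above.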
Related papers
- Image Collage on Arbitrary Shape via Shape-Aware Slicing and Optimization [6.233023267175408]
We present a shape slicing algorithm and an optimization scheme that can create image collages of arbitrary shapes.
Shape-Aware Slicing, which is designed specifically for irregular shapes, takes human perception and shape structure into account to generate visually pleasing partitions.
arXiv Detail & Related papers (2023-11-17T09:41:30Z)
- Differentiable Drawing and Sketching [0.0]
We present a differentiable relaxation of the process of drawing points, lines and curves into a pixel.
This relaxation allows end-to-end differentiable programs and deep networks to be learned and optimised.
arXiv Detail & Related papers (2021-03-30T09:25:55Z)
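As a rough illustration of the idea behind the Differentiable Drawing and Sketching entry above, the sketch below rasterizes a single point as a soft blob so that pixel intensities are differentiable with respect to the point's coordinates. The Gaussian kernel and the name `soft_rasterize_point` are assumptions, not that paper's formulation.

```python
# Generic illustration of a differentiable relaxation of rasterization:
# a point is drawn as a soft blob whose intensity falls off with squared
# distance, so gradients flow back to the point's coordinates.
import torch

def soft_rasterize_point(point: torch.Tensor, h: int, w: int, sigma: float = 1.5) -> torch.Tensor:
    """point: tensor of shape (2,) holding (x, y); returns an (h, w) canvas."""
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=point.dtype),
        torch.arange(w, dtype=point.dtype),
        indexing="ij",
    )
    sq_dist = (xs - point[0]) ** 2 + (ys - point[1]) ** 2
    return torch.exp(-sq_dist / (2.0 * sigma ** 2))

# Because the canvas is differentiable w.r.t. the point, gradient descent can
# move the point toward a target drawing.
point = torch.tensor([10.0, 20.0], requires_grad=True)
canvas = soft_rasterize_point(point, 32, 32)
loss = ((canvas - torch.zeros(32, 32)) ** 2).mean()
loss.backward()
print(point.grad)
```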
- Unsupervised Contrastive Photo-to-Caricature Translation based on Auto-distortion [49.93278173824292]
Photo-to-caricature translation aims to synthesize a caricature as a rendered image that exaggerates facial features through sketching, pencil strokes, or other artistic drawing styles.
Style rendering and geometry deformation are the most important aspects of the photo-to-caricature translation task.
We propose an unsupervised contrastive photo-to-caricature translation architecture.
arXiv Detail & Related papers (2020-11-10T08:14:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Deep Generation of Face Images from Sketches [36.146494762987146]
Deep image-to-image translation techniques allow fast generation of face images from freehand sketches.
Existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input.
We propose to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.
Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.
arXiv Detail & Related papers (2020-06-01T16:20:23Z)
- Image Morphing with Perceptual Constraints and STN Alignment [70.38273150435928]
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames that, combined with a perceptual similarity loss, promote smooth transformations over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
arXiv Detail & Related papers (2020-04-29T10:49:10Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim to transform an image of a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)