Cross-Domain Style Mixing for Face Cartoonization
- URL: http://arxiv.org/abs/2205.12450v1
- Date: Wed, 25 May 2022 02:39:10 GMT
- Title: Cross-Domain Style Mixing for Face Cartoonization
- Authors: Seungkwon Kim, Chaeheon Gwak, Dohyun Kim, Kwangho Lee, Jihye Back,
Namhyuk Ahn, Daesik Kim
- Abstract summary: We propose a novel method called Cross-domain Style mixing, which combines two latent codes from two different domains.
Our method effectively stylizes faces into multiple cartoon characters at various face abstraction levels using only a single generator.
- Score: 6.174413879403037
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The cartoon domain has recently gained increasing popularity. Previous
studies have attempted high-quality portrait stylization into the cartoon domain;
however, this remains challenging because they have not properly addressed critical
constraints, such as the need for a large number of training images or the lack of
support for abstract cartoon faces. Recently, a layer swapping method has been used
for stylization with only a limited number of training images; however, its use
cases are still narrow as it inherits the remaining issues. In this paper, we
propose a novel method called Cross-domain Style mixing, which combines two latent
codes from two different domains. Our method effectively stylizes faces into
multiple cartoon characters at various face abstraction levels using only a single
generator, without requiring a large number of training images.
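A minimal sketch of the core operation, cross-domain style mixing: combine a
W+ latent code inverted from a real face (photo domain) with a W+ code from a
cartoon-domain generator fine-tuned from the photo generator. The function name,
tensor shapes, cutoff value, and the G_cartoon.synthesis call in the usage
comments are illustrative assumptions, not the authors' implementation.

import torch

def cross_domain_style_mix(w_photo, w_cartoon, cutoff=8):
    # w_photo:   [num_layers, 512] W+ code inverted from a real face
    #            with a photo-domain StyleGAN.
    # w_cartoon: [num_layers, 512] W+ code from a cartoon-domain
    #            StyleGAN fine-tuned from the photo generator.
    # cutoff:    layer index at which the domains switch; coarse layers
    #            (pose, face structure) keep the photo code, fine layers
    #            (texture, color) take the cartoon code. Raising the
    #            cutoff preserves more identity; lowering it gives a
    #            more abstract cartoon.
    w_mixed = w_cartoon.clone()
    w_mixed[:cutoff] = w_photo[:cutoff]
    return w_mixed

# Hypothetical usage with a StyleGAN2-style synthesis network:
# w_mixed = cross_domain_style_mix(w_photo, w_cartoon, cutoff=8)
# cartoon_face = G_cartoon.synthesis(w_mixed.unsqueeze(0))

Because the cartoon generator is fine-tuned from the photo generator, the two
W+ spaces stay roughly aligned, which is why mixing codes across the two
domains can produce a coherent result from a single generator.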
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face and corresponding sketch images.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- ToonAging: Face Re-Aging upon Artistic Portrait Style Transfer [6.305926064192544]
We introduce a novel one-stage method for face re-aging combined with portrait style transfer.
We leverage existing face re-aging and style transfer networks, both trained within the same PR domain.
Our method offers greater flexibility compared to domain-level fine-tuning approaches.
arXiv Detail & Related papers (2024-02-05T05:25:33Z)
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods require the use of example-based adaptation approaches to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- ToonTalker: Cross-Domain Face Reenactment [80.52472147553333]
Cross-domain face reenactment involves driving a cartoon image with the video of a real person and vice versa.
Recently, many works have focused on one-shot talking face generation to drive a portrait with a real video.
We propose a transformer-based framework to align the motions from different domains into a common latent space.
arXiv Detail & Related papers (2023-08-24T15:43:14Z)
- CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer Learning [77.27821665339492]
CtlGAN is a new few-shot artistic portraits generation model with a novel contrastive transfer learning strategy.
We adapt a pretrained StyleGAN in the source domain to a target artistic domain with no more than 10 artistic faces.
We propose a new encoder that embeds real faces into Z+ space, together with a dual-path training strategy to better cope with the adapted decoder.
arXiv Detail & Related papers (2022-03-16T13:28:17Z)
- AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation [84.52819242283852]
We propose a novel framework to translate a portrait photo-face into an anime appearance.
Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face.
Existing methods often fail to transfer the styles of reference anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated faces.
arXiv Detail & Related papers (2021-02-24T22:47:38Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration [53.98437317161086]
Given an input face photo, the goal of caricature generation is to produce stylized, exaggerated caricatures that share the same identity as the photo.
We propose a novel framework called Multi-Warping GAN (MW-GAN), including a style network and a geometric network.
Experiments show that caricatures generated by MW-GAN have better quality than existing methods.
arXiv Detail & Related papers (2020-01-07T03:08:30Z)