High-Quality Face Caricature via Style Translation
- URL: http://arxiv.org/abs/2311.13338v1
- Date: Wed, 22 Nov 2023 12:03:33 GMT
- Title: High-Quality Face Caricature via Style Translation
- Authors: Lamyanba Laishram, Muhammad Shaheryar, Jong Taek Lee, and Soon Ki Jung
- Abstract summary: We propose a high-quality, unpaired face caricature method that is appropriate for use in the real world.
We attain the exaggeration of facial features and the stylization of appearance through a two-step process.
The face caricature projection step employs an encoder, trained on real and caricature faces, together with the pretrained generator to project both real and caricature faces.
- Score: 1.3457834965263997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caricature is an exaggerated form of artistic portraiture that accentuates
unique yet subtle characteristics of human faces. Recently, advancements in
deep end-to-end techniques have yielded encouraging outcomes in capturing both
style and elevated exaggerations in creating face caricatures. Most of these
approaches tend to produce cartoon-like results that are less practical
for real-world applications. In this study, we propose a high-quality,
unpaired face caricature method that is appropriate for use in the real world
and uses computer vision techniques and GAN models. We attain the exaggeration
of facial features and the stylization of appearance through a two-step
process: Face caricature generation and face caricature projection. The face
caricature generation step creates new caricature face datasets from real
images and trains a generative model using the real and newly created
caricature datasets. The face caricature projection step employs an encoder,
trained on real and caricature faces, together with the pretrained generator to
project both real and caricature faces. We perform incremental facial
exaggeration from the real image to the caricature face using the encoder and generator's latent
space. Our projection preserves the facial identity, attributes, and
expressions from the input image. Also, it accounts for facial occlusions, such
as reading glasses or sunglasses, to enhance the robustness of our model.
Furthermore, we conducted a comprehensive comparison of our approach with
various state-of-the-art face caricature methods, highlighting our process's
distinctiveness and exceptional realism.
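The abstract describes projecting real and caricature faces into the latent space of a pretrained generator via a trained encoder, then exaggerating incrementally along that space. The sketch below illustrates the general idea only: it assumes a StyleGAN-like `generator`, a trained `encoder`, and plain linear interpolation between latent codes, none of which are confirmed details of the authors' implementation.

```python
# Minimal sketch (not the authors' code) of incremental exaggeration by
# interpolating between a real-face latent code and a caricature latent code
# in a pretrained generator's latent space. `encoder` and `generator` are
# placeholders for the trained networks described in the abstract.
import torch

@torch.no_grad()
def incremental_caricature(encoder, generator, real_img, caricature_img, steps=5):
    """Project both images to latent codes, then decode a sequence of
    progressively exaggerated faces via linear interpolation."""
    w_real = encoder(real_img)        # latent code of the real face
    w_cari = encoder(caricature_img)  # latent code of the caricature face
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        w_mix = torch.lerp(w_real, w_cari, t)  # blend the two codes
        frames.append(generator(w_mix))        # decode the intermediate face
    return frames  # frames[0] ~ input identity, frames[-1] ~ full caricature
```

Intermediate frames provide the incremental exaggeration the abstract mentions, while the endpoints stay anchored to the projected input and caricature codes.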
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that allows us to capture the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z) - Towards Localized Fine-Grained Control for Facial Expression Generation [54.82883891478555]
Humans, particularly their faces, are central to content generation due to their ability to convey rich expressions and intent.
Current generative models mostly generate flat neutral expressions and characterless smiles without authenticity.
We propose the use of AUs (action units) for facial expression control in face generation.
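As a hedged illustration of the AU-conditioning idea summarized above (not code from the paper), the toy generator below concatenates a vector of action-unit intensities with a noise code; the class name, the 17-AU dimensionality, and the MLP architecture are all illustrative assumptions.

```python
# Hypothetical AU-conditioned face generator: expression control comes from a
# vector of action-unit intensities appended to the latent noise code.
import torch
import torch.nn as nn

class AUConditionedGenerator(nn.Module):
    def __init__(self, noise_dim=128, num_aus=17, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_aus, 512),
            nn.ReLU(),
            nn.Linear(512, img_pixels),
            nn.Tanh(),
        )

    def forward(self, z, au_intensities):
        # au_intensities: (batch, num_aus) values in [0, 1] controlling expression
        return self.net(torch.cat([z, au_intensities], dim=1))

g = AUConditionedGenerator()
z = torch.randn(1, 128)
aus = torch.zeros(1, 17)
aus[0, 11] = 1.0  # raise one AU's intensity (index chosen for illustration only)
fake_face = g(z, aus)
```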
arXiv Detail & Related papers (2024-07-25T18:29:48Z) - Real Face Video Animation Platform [8.766564778178564]
We propose a facial animation platform that enables real-time conversion from real human faces to cartoon-style faces.
Users can input a real face video or image and select their desired cartoon style.
The system will then automatically analyze facial features, execute necessary preprocessing, and invoke appropriate models to generate expressive anime-style faces.
arXiv Detail & Related papers (2024-07-12T14:17:41Z) - Generalizable Face Landmarking Guided by Conditional Face Warping [34.49985314656207]
We learn a generalizable face landmarker based on labeled real human faces and unlabeled stylized faces.
Our method outperforms existing state-of-the-art domain adaptation methods in face landmarking tasks.
arXiv Detail & Related papers (2024-04-18T16:53:08Z) - Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - Modeling Caricature Expressions by 3D Blendshape and Dynamic Texture [58.78290175562601]
This paper presents a solution to the problem of deforming an artist-drawn caricature according to a given normal face expression.
The key to our solution is a novel method for modeling caricature expressions, which extends the traditional 3DMM representation to the caricature domain.
The experiments demonstrate the effectiveness of the proposed method.
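For context, a standard linear blendshape (3DMM-style) model, which this paper reportedly extends to the caricature domain, composes a mesh as a mean shape plus weighted expression offsets. The minimal sketch below shows only that generic formulation, not the paper's caricature-specific extension or its dynamic texture component.

```python
# Generic linear blendshape combination: mesh = mean_shape + sum_k w_k * B_k
import numpy as np

def blendshape_face(mean_shape, blendshapes, weights):
    """mean_shape: (V, 3) vertices; blendshapes: (K, V, 3) offsets; weights: (K,)."""
    return mean_shape + np.tensordot(weights, blendshapes, axes=1)

# Toy example: 4 vertices, 2 expression blendshapes
mean = np.zeros((4, 3))
B = np.random.randn(2, 4, 3) * 0.01  # small per-vertex expression offsets
w = np.array([0.7, 0.2])             # expression coefficients
mesh = blendshape_face(mean, B, w)
```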
arXiv Detail & Related papers (2020-08-13T06:31:01Z) - FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z) - AutoToon: Automatic Geometric Warping for Face Cartoon Generation [0.0]
We propose AutoToon, the first supervised deep learning method that yields high-quality warps for the warping component of caricatures.
In contrast to prior art, we leverage an SENet and spatial transformer module and train directly on artist warping fields.
We achieve appealing exaggerations that amplify distinguishing features of the face while preserving facial detail.
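The warping component that a spatial transformer performs can be sketched generically with torch.nn.functional.grid_sample, as below; this is an illustrative resampling example under assumed tensor shapes, not AutoToon's actual network or its artist-supervised warping fields.

```python
# Warp an image with a dense displacement field via differentiable resampling.
import torch
import torch.nn.functional as F

def warp_image(img, flow):
    """img: (N, C, H, W); flow: (N, H, W, 2) displacements in normalized [-1, 1] coords."""
    n, _, h, w = img.shape
    # Identity sampling grid in normalized coordinates, (x, y) order as expected by grid_sample
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(img, grid + flow, mode="bilinear", align_corners=True)

face = torch.rand(1, 3, 64, 64)
flow = 0.05 * torch.randn(1, 64, 64, 2)  # small exaggeration-style displacements
warped = warp_image(face, flow)
```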
arXiv Detail & Related papers (2020-04-06T02:27:51Z) - 3D-CariGAN: An End-to-End Solution to 3D Caricature Generation from Face Photos [78.14395302760148]
We propose an end-to-end deep neural network model that generates high-quality 3D caricatures directly from a normal 2D face photo.
Experiments including a novel two-level user study show that our system can generate high-quality 3D caricatures directly from normal face photos.
arXiv Detail & Related papers (2020-03-15T14:42:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.