Enhancing the Authenticity of Rendered Portraits with
Identity-Consistent Transfer Learning
- URL: http://arxiv.org/abs/2310.04194v1
- Date: Fri, 6 Oct 2023 12:20:40 GMT
- Title: Enhancing the Authenticity of Rendered Portraits with
Identity-Consistent Transfer Learning
- Authors: Luyuan Wang, Yiqian Wu, Yongliang Yang, Chen Liu, Xiaogang Jin
- Abstract summary: We present a novel photo-realistic portrait generation framework that can effectively mitigate the "uncanny valley" effect.
Our key idea is to employ transfer learning to learn an identity-consistent mapping from the latent space of rendered portraits to that of real portraits.
- Score: 30.64677966402945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite rapid advances in computer graphics, creating high-quality
photo-realistic virtual portraits is prohibitively expensive. Furthermore, the
well-known "uncanny valley" effect in rendered portraits has a significant
impact on the user experience, especially when the depiction closely resembles
a human likeness, where any minor artifacts can evoke feelings of eeriness and
repulsiveness. In this paper, we present a novel photo-realistic portrait
generation framework that can effectively mitigate the "uncanny valley"
effect and improve the overall authenticity of rendered portraits. Our key idea
is to employ transfer learning to learn an identity-consistent mapping from the
latent space of rendered portraits to that of real portraits. During the
inference stage, the input portrait of an avatar can be directly transferred to
a realistic portrait by changing its appearance style while maintaining the
facial identity. To this end, we collect a new dataset, Daz-Rendered-Faces-HQ
(DRFHQ), that is specifically designed for rendering-style portraits. We
leverage this dataset to fine-tune the StyleGAN2 generator, using our carefully
crafted framework, which helps to preserve the geometric and color features
relevant to facial identity. We evaluate our framework using portraits with
diverse gender, age, and race variations. Qualitative and quantitative
evaluations and ablation studies show the advantages of our method compared to
state-of-the-art approaches.
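The core idea above, mapping a rendered portrait's latent code to a real-portrait latent while preserving identity, can be illustrated with a toy NumPy sketch. Note that this is a hypothetical illustration, not the paper's method: the authors fine-tune a full StyleGAN2 generator on the DRFHQ dataset, whereas this sketch fits a simple linear map on synthetic latents to show the trade-off between style transfer and identity preservation. All variable names and data here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200  # toy latent dimension and number of (rendered, real) latent pairs

# Hypothetical latents: X for rendered portraits, Y for the corresponding real ones.
X = rng.normal(size=(d, n))
A = np.eye(d) + 0.3 * rng.normal(size=(d, d))       # unknown "style" transform
Y = A @ X + 0.05 * rng.normal(size=(d, n))          # real-portrait latents (toy)

lam = 1.0  # identity-consistency weight: penalises drifting away from the input

# Closed-form minimiser of ||W X - Y||^2 + lam * ||W X - X||^2 over linear maps W.
# The second term is a stand-in for the paper's identity-preservation constraint.
W = (Y + lam * X) @ X.T @ np.linalg.inv((1.0 + lam) * (X @ X.T))

mapped = W @ X
style_err = np.linalg.norm(mapped - Y) / np.linalg.norm(Y)  # distance to real style
ident_err = np.linalg.norm(mapped - X) / np.linalg.norm(X)  # drift from the input
print(f"style error: {style_err:.3f}, identity drift: {ident_err:.3f}")
```

Raising `lam` pulls the mapped latent back toward the input (smaller identity drift, larger style error), which is the same tension the paper's identity-consistent transfer learning is designed to balance, only with a learned nonlinear generator instead of a linear map.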
Related papers
- Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation [53.767090490974745]
Follow-Your-Emoji is a diffusion-based framework for portrait animation.
It animates a reference portrait with target landmark sequences.
Our method demonstrates strong performance in controlling the expressions of freestyle portraits.
arXiv Detail & Related papers (2024-06-04T02:05:57Z)
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation [4.568539181254851]
We propose AniPortrait, a framework for generating high-quality animation driven by audio and a reference portrait image.
Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality.
Our methodology exhibits considerable potential in terms of flexibility and controllability, which can be effectively applied in areas such as facial motion editing or face reenactment.
arXiv Detail & Related papers (2024-03-26T13:35:02Z)
- MagiCapture: High-Resolution Multi-Concept Portrait Customization [34.131515004434846]
MagiCapture is a personalization method for integrating subject and style concepts to generate high-resolution portrait images.
We present a novel Attention Refocusing loss coupled with auxiliary priors, both of which facilitate robust learning within this weakly supervised learning setting.
Our pipeline also includes additional post-processing steps to ensure the creation of highly realistic outputs.
arXiv Detail & Related papers (2023-09-13T11:37:04Z)
- Few-shots Portrait Generation with Style Enhancement and Identity Preservation [3.6937810031393123]
StyleIdentityGAN model can ensure the identity and artistry of the generated portrait at the same time.
Style-enhanced module focuses on artistic style features decoupling and transferring to improve the artistry of generated virtual face images.
Experiments demonstrate the superiority of StyleIdentityGAN over state-of-the-art methods in artistry and identity effects.
arXiv Detail & Related papers (2023-03-01T10:02:12Z)
- What's in a Decade? Transforming Faces Through Time [70.78847389726937]
We assemble the Faces Through Time dataset, which contains over a thousand portrait images from each decade, spanning the 1880s to the present day.
We present a framework for resynthesizing portrait images across time, imagining how a portrait taken in one decade might have looked had it been taken in another.
arXiv Detail & Related papers (2022-10-13T00:48:18Z)
- Explicitly Controllable 3D-Aware Portrait Generation [42.30481422714532]
We propose a 3D portrait generation network that produces consistent portraits according to semantic parameters regarding pose, identity, expression and lighting.
Our method outperforms prior art in extensive experiments, producing realistic portraits with vivid expressions under natural lighting when viewed from free viewpoints.
arXiv Detail & Related papers (2022-09-12T17:40:08Z)
- Portrait Interpretation and a Benchmark [49.484161789329804]
The proposed portrait interpretation recognizes the perception of humans from a new systematic perspective.
We construct a new dataset that contains 250,000 images labeled with identity, gender, age, physique, height, expression, and posture of the whole body and arms.
Our experimental results demonstrate that combining the tasks related to portrait interpretation can yield benefits.
arXiv Detail & Related papers (2022-07-27T06:25:09Z)
- CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer Learning [77.27821665339492]
CtlGAN is a new few-shot artistic portraits generation model with a novel contrastive transfer learning strategy.
We adapt a pretrained StyleGAN in the source domain to a target artistic domain with no more than 10 artistic faces.
We propose a new encoder that embeds real faces into Z+ space, together with a dual-path training strategy to better cope with the adapted decoder.
arXiv Detail & Related papers (2022-03-16T13:28:17Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- NPRportrait 1.0: A Three-Level Benchmark for Non-Photorealistic Rendering of Portraits [67.58044348082944]
This paper proposes a new structured, three-level benchmark dataset for the evaluation of stylised portrait images.
Rigorous criteria were used for its construction, and its consistency was validated by user studies.
A new methodology has been developed for evaluating portrait stylisation algorithms.
arXiv Detail & Related papers (2020-09-01T18:04:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.