AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer
Learning
- URL: http://arxiv.org/abs/2303.14297v1
- Date: Fri, 24 Mar 2023 23:04:20 GMT
- Title: AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer
Learning
- Authors: Guoxian Song and Hongyi Xu and Jing Liu and Tiancheng Zhi and Yichun
Shi and Jianfeng Zhang and Zihang Jiang and Jiashi Feng and Shen Sang and
Linjie Luo
- Abstract summary: We propose a novel framework, AgileGAN3D, that can produce artistically appealing 3D portraits with detailed geometry.
New stylization can be obtained with just a few (around 20) unpaired 2D exemplars.
Our pipeline demonstrates strong capability in turning user photos into a diverse range of 3D artistic portraits.
- Score: 80.67196184480754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While substantial progress has been made in automated 2D portrait
stylization, appealing 3D portrait stylization from a single user photo remains
an unresolved challenge. One primary obstacle is the lack of high-quality
stylized 3D training data. In this paper, we propose a novel framework,
AgileGAN3D, that can produce artistically appealing and personalized 3D
portraits with detailed geometry. A new stylization can be obtained with just a
few (around 20) unpaired 2D exemplars. We achieve this by first leveraging
existing 2D stylization capabilities (style prior creation) to produce a large
number of augmented 2D style exemplars. These augmented exemplars carry
accurate camera pose labels and are paired with real face images, both of which
prove critical for the downstream 3D stylization task. Capitalizing on recent
advances in 3D-aware GAN models, we perform guided transfer learning on a
pretrained 3D GAN generator to produce multi-view-consistent stylized
renderings. To achieve 3D GAN inversion that preserves the subject's identity
well, we incorporate a multi-view consistency loss in the training of our
encoder. Our pipeline demonstrates strong capability in turning user photos
into a diverse range of 3D artistic portraits. Both qualitative results and
quantitative evaluations show the superior performance of our method. Code and
pretrained models will be released for reproducibility.
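
The two key ingredients of the abstract lend themselves to a short
illustration. Below is a minimal sketch in PyTorch of how style prior creation
and the multi-view consistency loss could be structured; all module names, call
signatures, and the camera-pose format are hypothetical stand-ins, not
AgileGAN3D's actual interfaces.

```python
import torch
import torch.nn.functional as F

# --- Style prior creation (hypothetical sketch) --------------------------
# Sample a pretrained 3D-aware GAN at known camera poses and stylize each
# rendering with a few-shot-trained 2D stylizer, yielding triplets of
# (real rendering, stylized exemplar, camera pose). `gan3d`, `stylizer`,
# and `sample_pose` are stand-ins for the components the paper describes.
@torch.no_grad()
def build_style_prior(gan3d, stylizer, sample_pose, n=5000, z_dim=512):
    triplets = []
    for _ in range(n):
        z = torch.randn(1, z_dim)   # random identity latent
        cam = sample_pose()         # camera pose, recorded as a label
        real = gan3d(z, cam)        # photorealistic rendering at that pose
        styled = stylizer(real)     # paired 2D stylization of the same face
        triplets.append((real, styled, cam))
    return triplets

# --- Multi-view consistency loss (hypothetical sketch) -------------------
# Render one latent from two viewpoints and penalize disagreement between
# the latents the encoder recovers from each view, encouraging an inversion
# that preserves the subject's identity across viewpoints.
def multi_view_consistency(encoder, generator, w, cam_a, cam_b):
    img_a = generator(w, cam_a)     # same latent, viewpoint A
    img_b = generator(w, cam_b)     # same latent, viewpoint B
    w_a = encoder(img_a)            # invert each rendering back to latents
    w_b = encoder(img_b)
    return F.mse_loss(w_a, w_b)     # latents should agree across views
```

In actual encoder training this consistency term would presumably be combined
with reconstruction and identity losses; the sketch isolates only the
cross-view agreement idea.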
Related papers
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image
Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Freestyle 3D-Aware Portrait Synthesis Based on Compositional Generative
Priors [12.663585627797863]
We propose a novel text-driven 3D-aware portrait synthesis framework.
Specifically, for a given portrait style prompt, we first composite two generative priors, a 3D-aware GAN generator and a text-guided image editor, to construct a small stylized portrait set.
We then map the style domain of this set to our proposed 3D latent feature generator and obtain a 3D representation containing the given style information.
arXiv Detail & Related papers (2023-06-27T12:23:04Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks
[101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to
Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- 3DAvatarGAN: Bridging Domains for Personalized Editable Avatars
[75.31960120109106]
3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure.
We propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets.
We present a deformation-based technique for modeling the exaggerated geometry of artistic domains, enabling, as a byproduct, personalized geometric editing.
arXiv Detail & Related papers (2023-01-06T19:58:47Z)
- Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for synthetic images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z)