3D Face Arbitrary Style Transfer
- URL: http://arxiv.org/abs/2303.07709v1
- Date: Tue, 14 Mar 2023 08:51:51 GMT
- Title: 3D Face Arbitrary Style Transfer
- Authors: Xiangwen Deng, Yingshuang Zou, Yuanhao Cai, Chendong Zhao, Yang Liu,
Zhifang Liu, Yuxiao Liu, Jiawei Zhou, Haoqian Wang
- Abstract summary: We propose a novel method, namely Face-guided Dual Style Transfer (FDST).
FDST employs a 3D decoupling module to separate facial geometry and texture.
We show that FDST can be applied in many downstream tasks, including region-controllable style transfer, high-fidelity face texture reconstruction, and artistic face reconstruction.
- Score: 18.09280257466941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer for 3D faces has attracted increasing attention. However,
previous methods mainly use images of artistic faces for style transfer while
ignoring arbitrary style images such as abstract paintings. To solve this
problem, we propose a novel method, namely Face-guided Dual Style Transfer
(FDST). First, FDST employs a 3D decoupling module to separate facial
geometry and texture. We then propose a style fusion strategy for facial
geometry. Subsequently, we design an optimization-based DDSG mechanism for
texture that can guide the style transfer with two style images. Besides the
normal style image input, DDSG can use the original face input as a second
style input that serves as a face prior. In this way, high-quality arbitrary
style transfer results for faces can be obtained. Furthermore, FDST can be applied to many
downstream tasks, including region-controllable style transfer, high-fidelity
face texture reconstruction, large-pose face reconstruction, and artistic face
reconstruction. Comprehensive quantitative and qualitative results show that
our method achieves performance comparable to existing methods. All source code and pre-trained
weights will be released to the public.
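To make the dual style guidance concrete, here is a minimal sketch of an optimization-based texture stylization driven by two style inputs, in the spirit of the DDSG mechanism described above: a Gram-matrix style loss toward the style image plus a second style loss toward the original face, which serves as the face prior. The VGG layer choices, loss weights, and optimizer settings are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {1, 6, 11, 20, 29}  # relu1_1 .. relu5_1 (assumed choice)
CONTENT_LAYER = 22                 # relu4_2 (assumed choice)

def features(x):
    """Collect VGG feature maps at the chosen style/content layers."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):
    """Gram matrix of a feature map, normalized by its size."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feats_a, feats_b):
    return sum(F.mse_loss(gram(a), gram(b)) for a, b in zip(feats_a, feats_b))

def dual_style_transfer(face, style, steps=300,
                        w_style=1e4, w_prior=1e3, w_content=1.0):
    """Optimize a texture against two style inputs: the style image and
    the original face, which acts as a face prior (weights are assumed)."""
    face, style = face.to(device), style.to(device)
    tex = face.clone().requires_grad_(True)       # initialize from the face
    style_feats, _ = features(style)              # first style guidance
    prior_feats, content_feat = features(face)    # second: the face prior
    opt = torch.optim.Adam([tex], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        tex_style, tex_content = features(tex)
        loss = (w_style * style_loss(tex_style, style_feats)
                + w_prior * style_loss(tex_style, prior_feats)
                + w_content * F.mse_loss(tex_content, content_feat))
        loss.backward()
        opt.step()
    return tex.detach()
```

Raising w_prior relative to w_style pulls the result toward the facial structure of the input, which is the intuition behind using the face itself as a second style image.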
Related papers
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
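As an illustration of what an attention-based style blending module might look like, here is a hedged sketch of cross-attention between StyleGAN W+ latent codes; the module design, dimensions, and residual mixing are assumptions based on the abstract above, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleBlend(nn.Module):
    """Cross-attention over StyleGAN W+ codes: the target's layer-wise
    codes attend to the source identity codes, and the attended identity
    is mixed back into the target via a residual connection."""
    def __init__(self, dim=512):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from the target code
        self.k = nn.Linear(dim, dim)  # keys from the source code
        self.v = nn.Linear(dim, dim)  # values carry source identity
        self.scale = dim ** -0.5

    def forward(self, w_src, w_tgt):
        attn = torch.softmax(
            self.q(w_tgt) @ self.k(w_src).transpose(1, 2) * self.scale,
            dim=-1)
        return w_tgt + attn @ self.v(w_src)

blend = StyleBlend()
w_source = torch.randn(1, 18, 512)      # identity code (source face)
w_target = torch.randn(1, 18, 512)      # attribute code (target face)
w_swapped = blend(w_source, w_target)   # would feed a StyleGAN generator
```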
- 3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh Rasterization [4.668492532161309]
We propose to use a neural radiance field (NeRF) to represent a 3D human face and combine it with 2D style transfer to stylize the 3D face.
We find that directly training a NeRF on stylized images from 2D style transfer introduces 3D inconsistency issues and causes blurriness.
We propose a hybrid framework of NeRF and mesh rasterization to combine the benefits of NeRF's high-fidelity geometry reconstruction and the fast rendering speed of meshes.
arXiv Detail & Related papers (2023-11-22T05:24:35Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer to mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- SAFA: Structure Aware Face Animation [9.58882272014749]
We propose a structure aware face animation (SAFA) method which constructs specific geometric structures to model different components of a face image.
We use a 3D morphable model (3DMM) to model the face, multiple affine transforms to model the other foreground components like hair and beard, and an identity transform to model the background.
The 3DMM geometric embedding not only helps generate realistic structure for the driving scene, but also contributes to a better perception of the occluded areas in the generated image.
arXiv Detail & Related papers (2021-11-09T03:22:38Z)
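SAFA's per-component motion model suggests a simple compositing scheme. The sketch below blends per-region dense flows with soft masks and falls back to an identity transform for the background; the signature and mask conventions are assumptions for illustration, not the paper's code.

```python
import torch

def compose_motion(face_flow, affine_flows, masks, identity_flow):
    """Blend per-component dense flows (B, 2, H, W) with soft masks
    (B, 1, H, W), one mask per flow; whatever mass the masks leave
    uncovered falls back to the identity flow, i.e. the static background."""
    flows = [face_flow] + list(affine_flows)  # 3DMM flow + affine flows
    combined = torch.zeros_like(face_flow)
    bg_mask = torch.ones_like(masks[0])
    for flow, mask in zip(flows, masks):
        combined = combined + mask * flow
        bg_mask = bg_mask - mask
    return combined + bg_mask.clamp(min=0) * identity_flow
```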
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that explicitly models physical attributes of the face a priori, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot face swapping (MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- Exemplar-Based 3D Portrait Stylization [23.585334925548064]
We present the first framework for one-shot 3D portrait style transfer.
It can generate 3D face models with both the geometry exaggerated and the texture stylized.
Our method achieves consistently good results across different artistic styles and outperforms existing methods.
arXiv Detail & Related papers (2021-04-29T17:59:54Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
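The completion step described above amounts to GAN inversion constrained to the visible pixels. Below is a rough sketch under that reading; the generator interface is a hypothetical stand-in, not OSTeC's actual API.

```python
import torch

def complete_texture(G, rotated, visible_mask, steps=200, lr=0.05):
    """Optimize a latent code so the generator reproduces the visible
    pixels of the rotated face; the generator's face prior fills in the
    rest. `G` and its `latent_dim` attribute are hypothetical stand-ins
    for a pretrained 2D face generator."""
    w = torch.zeros(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = G(w)                                  # full-face proposal
        loss = ((recon - rotated) * visible_mask).pow(2).mean()
        loss.backward()
        opt.step()
    return G(w).detach()                              # completed texture
```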
- Geometric Style Transfer [74.58782301514053]
We introduce a neural architecture that supports the transfer of geometric style.
The new architecture runs prior to a network that transfers texture style.
Users can input a content/style pair as is common, or they can choose to input a content/texture-style/geometry-style triple (a pipeline sketch follows below).
arXiv Detail & Related papers (2020-07-10T16:33:23Z)
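The pipeline sketch referenced above: a geometry stage warps the content image toward the geometry-style input, and a texture style network runs afterwards. Both networks are simplified placeholders under assumed designs and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryTransfer(nn.Module):
    """Predicts a dense offset field that warps the content image toward
    the geometry-style image (placeholder CNN; the design is assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, content, geom_style):
        # Small offsets on top of the identity sampling grid.
        offsets = 0.1 * self.net(torch.cat([content, geom_style], dim=1))
        b = content.shape[0]
        theta = (torch.eye(2, 3, device=content.device)
                 .unsqueeze(0).expand(b, -1, -1))
        base = F.affine_grid(theta, list(content.shape), align_corners=False)
        grid = base + offsets.permute(0, 2, 3, 1)
        return F.grid_sample(content, grid, align_corners=False)

def stylize(content, texture_style, geometry_style, geom_net, texture_net):
    """Geometry style is applied first, texture style second; texture_net
    is any 2D texture style transfer network (placeholder argument)."""
    warped = geom_net(content, geometry_style)  # geometry style first ...
    return texture_net(warped, texture_style)   # ... then texture style
```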
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.