Exemplar-Based 3D Portrait Stylization
- URL: http://arxiv.org/abs/2104.14559v1
- Date: Thu, 29 Apr 2021 17:59:54 GMT
- Title: Exemplar-Based 3D Portrait Stylization
- Authors: Fangzhou Han, Shuquan Ye, Mingming He, Menglei Chai and Jing Liao
- Abstract summary: We present the first framework for one-shot 3D portrait style transfer.
It can generate 3D face models with both the geometry exaggerated and the texture stylized.
Our method achieves robust, high-quality results on different artistic styles and outperforms existing methods.
- Score: 23.585334925548064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-based portrait stylization is widely attractive and highly desired.
Despite recent successes, it remains challenging, especially when considering
both texture and geometric styles. In this paper, we present the first
framework for one-shot 3D portrait style transfer, which can generate 3D face
models with both the geometry exaggerated and the texture stylized while
preserving the identity from the original content. It requires only one
arbitrary style image instead of a large set of training examples for a
particular style, provides geometry and texture outputs that are fully
parameterized and disentangled, and enables further graphics applications with
the 3D representations. The framework consists of two stages. In the first
geometric style transfer stage, we use facial landmark translation to capture
the coarse geometry style and guide the deformation of the dense 3D face
geometry. In the second texture style transfer stage, we focus on performing
style transfer on the canonical texture by adopting a differentiable renderer
to optimize the texture in a multi-view framework. Experiments show that our
method achieves robust, high-quality results on different artistic styles and
outperforms existing methods. We also demonstrate the advantages of our method
via various 2D and 3D graphics applications. Project page is
https://halfjoe.github.io/projs/3DPS/index.html.
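The paper page itself carries no code, but the second stage described above (optimizing the canonical texture through a differentiable renderer over multiple views) can be pictured with a minimal PyTorch sketch. Everything below is an illustrative assumption, not the authors' implementation: render_views is a hypothetical placeholder for a differentiable rasterizer over the already-deformed mesh, and the VGG Gram-matrix style loss is a common choice for exemplar-based style transfer rather than the exact losses used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Hypothetical placeholder: a differentiable renderer mapping a learnable
# canonical UV texture to images of the (geometry-deformed) face mesh seen
# from several camera poses. Here it is stubbed out so the sketch runs.
def render_views(texture, cameras):
    # texture: (1, 3, H, W) canonical UV texture in [0, 1]
    # cameras: list of camera parameters (ignored by this stub)
    return torch.stack([
        F.interpolate(texture, size=(224, 224), mode="bilinear",
                      align_corners=False)[0]
        for _ in cameras
    ])

def gram(feat):
    # Gram matrix of VGG features, a standard style statistic.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Frozen VGG feature extractor for content and style losses.
vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def multiview_texture_style_transfer(init_texture, style_image, cameras,
                                     steps=200, style_weight=1e4, lr=0.01):
    """Optimize the canonical texture so rendered views match the style exemplar
    while staying close to the original content renders."""
    texture = init_texture.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([texture], lr=lr)
    style_gram = gram(vgg(style_image))                     # style exemplar statistics
    content_feats = vgg(render_views(init_texture, cameras)).detach()
    for _ in range(steps):
        optimizer.zero_grad()
        renders = render_views(texture, cameras)            # (V, 3, 224, 224)
        feats = vgg(renders)
        content_loss = F.mse_loss(feats, content_feats)     # preserve identity/content
        g = gram(feats)
        style_loss = F.mse_loss(g, style_gram.expand_as(g)) # match exemplar style
        loss = content_loss + style_weight * style_loss
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            texture.clamp_(0.0, 1.0)                        # keep a valid texture
    return texture.detach()
```

In the actual method the views would come from rendering the deformed 3D face mesh under a multi-view camera setup, and because the optimized variable is the canonical texture itself, the geometry and texture outputs stay fully parameterized and disentangled, which is what enables the downstream graphics applications mentioned in the abstract.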
Related papers
- LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example [5.999050119438177]
We propose a method that can produce a highly stylized 3D face model with a desired topology.
Our method trains a surface deformation network with a 3DMM and translates its domain to the target style.
The network stylizes the 3D face mesh by mimicking the style of the target using a differentiable renderer and directional CLIP losses.
arXiv Detail & Related papers (2024-03-22T14:20:54Z) - DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
arXiv Detail & Related papers (2023-09-08T16:17:45Z) - Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - 3D Face Arbitrary Style Transfer [18.09280257466941]
We propose a novel method, namely Face-guided Dual Style Transfer (FDST).
FDST employs a 3D decoupling module to separate facial geometry and texture.
We show that FDST can be applied in many downstream tasks, including region-controllable style transfer, high-fidelity face texture reconstruction, and artistic face reconstruction.
arXiv Detail & Related papers (2023-03-14T08:51:51Z) - 3DAvatarGAN: Bridging Domains for Personalized Editable Avatars [75.31960120109106]
3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure.
We propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets.
We show a deformation-based technique for modeling exaggerated geometry of artistic domains, enabling -- as a byproduct -- personalized geometric editing.
arXiv Detail & Related papers (2023-01-06T19:58:47Z) - StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z) - 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z) - 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.