StyleMorpheus: A Style-Based 3D-Aware Morphable Face Model
- URL: http://arxiv.org/abs/2503.11792v1
- Date: Fri, 14 Mar 2025 18:32:02 GMT
- Title: StyleMorpheus: A Style-Based 3D-Aware Morphable Face Model
- Authors: Peizhi Yan, Rabab K. Ward, Dan Wang, Qiang Tang, Shan Du
- Abstract summary: "StyleMorpheus" is the first style-based neural 3D Morphable Face Model that is trained on in-the-wild images. We fine-tune the decoder through style-based generative adversarial learning to achieve photorealistic 3D rendering quality. Our model achieves real-time rendering speed, allowing its use in virtual reality applications.
- Score: 11.19751552728812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For 3D face modeling, recently developed 3D-aware neural rendering methods can render photorealistic face images from arbitrary viewing directions. Training parametric, controllable 3D-aware face models, however, still relies on large-scale, lab-collected datasets. To address this issue, this paper introduces "StyleMorpheus", the first style-based neural 3D Morphable Face Model (3DMM) trained on in-the-wild images. It inherits 3DMM's disentangled controllability (over face identity, expression, and appearance) without requiring accurately reconstructed explicit 3D shapes. StyleMorpheus employs an auto-encoder structure: the encoder learns a representative, disentangled parametric code space, and the decoder improves the disentanglement by using shape- and appearance-related style codes in different sub-modules of the network. Furthermore, we fine-tune the decoder through style-based generative adversarial learning to achieve photorealistic 3D rendering quality. The proposed style-based design enables StyleMorpheus to achieve state-of-the-art 3D-aware face reconstruction results while allowing disentangled control of the reconstructed face. Our model achieves real-time rendering speed, allowing its use in virtual reality applications. We also demonstrate the capability of the style-based design in face editing applications such as style mixing and color editing. Project homepage: https://github.com/ubc-3d-vision-lab/StyleMorpheus.
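To make the auto-encoder design above concrete, here is a minimal PyTorch-style sketch; it is not the paper's actual implementation, and all module names, dimensions, and the toy renderer are illustrative assumptions. The encoder predicts disentangled identity, expression, and appearance codes; the decoder consumes them as style codes in separate shape- and appearance-related sub-modules; style mixing then amounts to swapping codes between two faces.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the style-based auto-encoder described in the
# abstract. Module names, dimensions, and the placeholder renderer are
# illustrative assumptions, not the StyleMorpheus architecture itself.

class Encoder(nn.Module):
    """Maps a face image to disentangled identity/expression/appearance codes."""
    def __init__(self, feat_dim=512, id_dim=128, exp_dim=64, app_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in for a CNN backbone
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.to_id = nn.Linear(feat_dim, id_dim)    # face identity code
        self.to_exp = nn.Linear(feat_dim, exp_dim)  # expression code
        self.to_app = nn.Linear(feat_dim, app_dim)  # appearance code

    def forward(self, img):
        f = self.backbone(img)
        return self.to_id(f), self.to_exp(f), self.to_app(f)

class ToyRenderer(nn.Module):
    """Placeholder for the 3D-aware rendering sub-network."""
    def __init__(self, w_dim):
        super().__init__()
        self.net = nn.Linear(2 * w_dim + 3, 3 * 64 * 64)

    def forward(self, w_shape, w_app, cam_pose):
        x = torch.cat([w_shape, w_app, cam_pose], dim=-1)
        return self.net(x).view(-1, 3, 64, 64)      # toy 64x64 RGB rendering

class StyleDecoder(nn.Module):
    """Shape- and appearance-related style codes modulate separate
    sub-modules, mirroring the decoder design described in the abstract."""
    def __init__(self, id_dim=128, exp_dim=64, app_dim=128, w_dim=256):
        super().__init__()
        self.shape_style = nn.Linear(id_dim + exp_dim, w_dim)  # geometry branch
        self.app_style = nn.Linear(app_dim, w_dim)             # appearance branch
        self.renderer = ToyRenderer(w_dim)

    def forward(self, z_id, z_exp, z_app, cam_pose):
        w_shape = self.shape_style(torch.cat([z_id, z_exp], dim=-1))
        w_app = self.app_style(z_app)
        return self.renderer(w_shape, w_app, cam_pose)

# Style mixing (illustrative): render face A's identity and expression
# with face B's appearance code.
enc, dec = Encoder(), StyleDecoder()
img_a, img_b = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
z_id_a, z_exp_a, _ = enc(img_a)
_, _, z_app_b = enc(img_b)
mixed = dec(z_id_a, z_exp_a, z_app_b, cam_pose=torch.zeros(1, 3))
```

Under a design like this, color editing similarly reduces to manipulating only the appearance code while the shape-related codes stay fixed, which is what the disentangled controllability claim amounts to in practice.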
Related papers
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation.
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
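As context for what using 3DGS as an explicit 3D representation entails, below is a minimal sketch of the per-Gaussian parameter set; the field names and initialization are illustrative and not taken from this paper (full 3DGS stores spherical-harmonic color coefficients rather than plain RGB):

```python
from dataclasses import dataclass
import torch

@dataclass
class GaussianCloud:
    """Minimal explicit 3DGS state (illustrative field set).
    Each row parameterizes one anisotropic 3D Gaussian."""
    means: torch.Tensor      # (N, 3) positions in world space
    scales: torch.Tensor     # (N, 3) per-axis scales of the covariance
    rotations: torch.Tensor  # (N, 4) unit quaternions orienting the covariance
    opacities: torch.Tensor  # (N, 1) alpha used in front-to-back compositing
    colors: torch.Tensor     # (N, 3) RGB (full 3DGS uses SH coefficients)

def init_cloud(n: int) -> GaussianCloud:
    """Random initialization for illustration only."""
    return GaussianCloud(
        means=torch.randn(n, 3),
        scales=torch.full((n, 3), 0.01),
        rotations=torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(n, 1),
        opacities=torch.full((n, 1), 0.1),
        colors=torch.rand(n, 3),
    )
```

Because every Gaussian is an explicit, independently optimizable primitive, edits and animation controls can act on these parameters directly, which is the editability advantage the summary refers to over implicit neural fields.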
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
- LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example [5.999050119438177]
We propose a method that can produce a highly stylized 3D face model with a desired topology.
Our method trains a surface deformation network with a 3DMM and translates its domain to the target style using differentiable meshes and directional CLIP losses, stylizing the 3D face mesh by mimicking the style of the target.
arXiv Detail & Related papers (2024-03-22T14:20:54Z)
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization poses a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
However, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- StyleRig: Rigging StyleGAN for 3D Control over Portrait Images [81.43265493604302]
StyleGAN generates portrait images of faces with eyes, teeth, hair, and context (neck, shoulders, background).
StyleGAN lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination.
We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN via a 3DMM.
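At a high level, rig-like control of this kind means learning a mapping from interpretable 3DMM parameters (pose, expression, illumination) into the latent space of a frozen, pretrained generator. Below is a minimal sketch of such an interface; all names and dimensions are assumptions for illustration, not StyleRig's actual network:

```python
import torch
import torch.nn as nn

class RigMapper(nn.Module):
    """Illustrative sketch: maps a latent code plus 3DMM control
    parameters to an edited latent for a frozen generator."""
    def __init__(self, w_dim=512, p_dim=80):  # p_dim: pose + expression + light
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(w_dim + p_dim, 512), nn.ReLU(),
            nn.Linear(512, w_dim),
        )

    def forward(self, w, params_3dmm):
        # Predict a residual so that a zero-strength edit leaves w nearly unchanged.
        return w + self.net(torch.cat([w, params_3dmm], dim=-1))

# Usage (illustrative): feed the edited latent to the frozen generator, e.g.
# edited_image = frozen_stylegan(RigMapper()(w, new_pose_params))
```

The residual formulation is a common design choice for such mappers: the generator stays fixed, and only the small mapping network is trained to realize 3D-interpretable edits.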
arXiv Detail & Related papers (2020-03-31T21:20:34Z)