3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh
Rasterization
- URL: http://arxiv.org/abs/2311.13168v1
- Date: Wed, 22 Nov 2023 05:24:35 GMT
- Title: 3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh
Rasterization
- Authors: Jianwei Feng and Prateek Singhal
- Abstract summary: We propose to use a neural radiance field (NeRF) to represent a 3D human face and combine it with 2D style transfer to stylize the 3D face.
We find that directly training a NeRF on stylized images from 2D style transfer introduces 3D inconsistency and causes blurriness.
We propose a hybrid framework of NeRF and mesh rasterization to combine the benefits of NeRF's high-fidelity geometry reconstruction and the fast rendering speed of meshes.
- Score: 4.668492532161309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer for human faces has been widely researched in recent
years. The majority of existing approaches work in the 2D image domain and
suffer from 3D inconsistency when applied to different viewpoints of the same
face. In this paper, we tackle the problem of 3D face style transfer, which
aims at generating stylized novel views of a 3D human face with multi-view
consistency. We propose to use a neural radiance field (NeRF) to represent the
3D human face and combine it with 2D style transfer to stylize the 3D face. We
find that directly training a NeRF on stylized images from 2D style transfer
introduces 3D inconsistency and causes blurriness. On the other hand, training
a NeRF jointly with 2D style transfer objectives converges poorly due to the
identity and head-pose gap between the style image and the content image. It
also poses challenges in training time and memory, since applying style
transfer loss functions requires volume rendering of the full image. We
therefore propose a hybrid framework of NeRF and mesh rasterization that
combines the benefits of NeRF's high-fidelity geometry reconstruction and the
fast rendering speed of meshes. Our framework consists of three stages: 1.
Training a NeRF model on input face images to learn the 3D geometry; 2.
Extracting a mesh from the trained NeRF model and optimizing it with style
transfer objectives via differentiable rasterization; 3. Training a new color
network in NeRF conditioned on a style embedding to enable arbitrary style
transfer to the 3D face. Experimental results show that our approach generates
high-quality face style transfer with strong 3D consistency, while also
enabling flexible style control.
Related papers
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis [11.463969116010183]
ArtNeRF is a novel face stylization framework derived from a 3D-aware GAN.
We propose an expressive generator to synthesize stylized faces and a triple-branch discriminator module to improve style consistency.
Experiments demonstrate that ArtNeRF is versatile in generating high-quality 3D-aware cartoon faces with arbitrary styles.
arXiv Detail & Related papers (2024-04-21T16:45:35Z)
- NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs [19.17715832282524]
A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene.
We generalize classic image analogies from 2D images to NeRFs.
Our method allows exploring the mix-and-match product space of 3D geometry and appearance.
arXiv Detail & Related papers (2024-02-13T17:47:42Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z)
- 3D Face Arbitrary Style Transfer [18.09280257466941]
We propose a novel method, namely Face-guided Dual Style Transfer (FDST).
FDST employs a 3D decoupling module to separate facial geometry and texture.
We show that FDST can be applied in many downstream tasks, including region-controllable style transfer, high-fidelity face texture reconstruction, and artistic face reconstruction.
arXiv Detail & Related papers (2023-03-14T08:51:51Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis [92.25145204543904]
StyleNeRF is a 3D-aware generative model for high-resolution image synthesis with high multi-view consistency.
It integrates the neural radiance field (NeRF) into a style-based generator.
It can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality.
arXiv Detail & Related papers (2021-10-18T02:37:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.