NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering
- URL: http://arxiv.org/abs/2303.06226v2
- Date: Mon, 27 Nov 2023 10:31:09 GMT
- Title: NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering
- Authors: Wojciech Zając, Joanna Waczyńska, Piotr Borycki, Jacek Tabor,
Maciej Zięba, Przemysław Spurek
- Abstract summary: The present paper proposes a novel approach, named NeRFlame, which combines the strengths of both NeRF and FLAME methods.
Our approach utilizes the FLAME mesh as a distinct density volume. Consequently, color values exist only in the vicinity of the FLAME mesh.
This FLAME framework is seamlessly incorporated into the NeRF architecture for predicting RGB colors, enabling our model to explicitly represent volume density and implicitly capture RGB colors.
- Score: 10.991274404360194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional 3D face models are based on mesh representations with texture.
One of the most important models is FLAME (Faces Learned with an Articulated
Model and Expressions), which produces meshes of human faces that are fully
controllable. Unfortunately, such models have problems with capturing geometric
and appearance details. In contrast to mesh representation, the neural radiance
field (NeRF) produces extremely sharp renders. However, implicit methods are
hard to animate and do not generalize well to unseen expressions. It is not
trivial to effectively control NeRF models to obtain face manipulation.
The present paper proposes a novel approach, named NeRFlame, which combines
the strengths of both NeRF and FLAME methods. Our method enables high-quality
rendering capabilities of NeRF while also offering complete control over the
visual appearance, similar to FLAME. In contrast to traditional NeRF-based
structures that use neural networks for RGB color and volume density modeling,
our approach utilizes the FLAME mesh as a distinct density volume.
Consequently, color values exist only in the vicinity of the FLAME mesh. This
FLAME framework is seamlessly incorporated into the NeRF architecture for
predicting RGB colors, enabling our model to explicitly represent volume
density and implicitly capture RGB colors.
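The core idea of using the FLAME mesh as an explicit density volume can be sketched as follows: density is maximal on the mesh surface and falls to zero beyond a small distance, so NeRF-predicted colors only matter in the mesh's vicinity. This is an illustrative sketch, not the paper's exact formulation; the linear falloff, the `eps` threshold, and the vertex-based distance approximation are assumptions.

```python
import numpy as np

def mesh_distance(points, vertices):
    """Distance from each query point to the nearest mesh vertex
    (a cheap stand-in for true point-to-triangle distance)."""
    # points: (N, 3), vertices: (M, 3)
    diffs = points[:, None, :] - vertices[None, :, :]   # (N, M, 3)
    return np.linalg.norm(diffs, axis=-1).min(axis=1)   # (N,)

def density(points, vertices, eps=0.05):
    """Volume density conditioned on the mesh: 1 on the surface,
    decaying linearly to 0 at distance eps."""
    d = mesh_distance(points, vertices)
    return np.clip(1.0 - d / eps, 0.0, 1.0)

# Toy example: a single-triangle "mesh".
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
query = np.array([[0.0, 0.0, 0.0],    # on a vertex -> density 1
                  [0.0, 0.0, 1.0]])   # far from the mesh -> density 0
print(density(query, verts))          # [1. 0.]
```

Because the density is a deterministic function of the controllable FLAME mesh, deforming the mesh (e.g. changing an expression parameter) directly moves the region where the radiance field is rendered.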
Related papers
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z) - Gaussian Splatting with NeRF-based Color and Opacity [7.121259735505479]
We propose a hybrid model Viewing Direction Gaussian Splatting (VDGS) that uses GS representation of the 3D object's shape and NeRF-based encoding of color and opacity.
Our model better describes shadows, light reflections, and the transparency of 3D objects without adding additional texture and light components.
arXiv Detail & Related papers (2023-12-21T10:52:59Z) - Dynamic Mesh-Aware Radiance Fields [75.59025151369308]
This paper designs a two-way coupling between mesh and NeRF during rendering and simulation.
We show that a hybrid system approach outperforms alternatives in visual realism for mesh insertion.
arXiv Detail & Related papers (2023-09-08T20:18:18Z) - RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object and a pre-trained diffusion model will guide the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
arXiv Detail & Related papers (2023-06-09T04:49:31Z) - Learning a Diffusion Prior for NeRFs [84.99454404653339]
We propose to use a diffusion model to generate NeRFs encoded on a regularized grid.
We show that our model can sample realistic NeRFs, while at the same time allowing conditional generations, given a certain observation as guidance.
arXiv Detail & Related papers (2023-04-27T19:24:21Z) - DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction
using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z) - NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from
3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z) - NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance
Fields [62.89785701659139]
We propose a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes.
NeRF-Loc takes a pre-trained NeRF model and camera view as input and produces labeled, oriented 3D bounding boxes of objects as output.
arXiv Detail & Related papers (2022-09-24T18:34:22Z) - HeadNeRF: A Real-time NeRF-based Parametric Head Model [39.240265611700735]
HeadNeRF is a novel NeRF-based parametric head model that integrates the neural radiance field to the parametric representation of the human head.
It can render high fidelity head images in real-time, and supports directly controlling the generated images' rendering pose and various semantic attributes.
arXiv Detail & Related papers (2021-12-10T16:10:13Z) - NeRF-VAE: A Geometry Aware 3D Scene Generative Model [14.593550382914767]
We propose NeRF-VAE, a 3D scene generative model that incorporates geometric structure via NeRF and differentiable volume rendering.
NeRF-VAE's explicit 3D rendering process contrasts with previous generative models that rely on convolution-based rendering.
We show that, once trained, NeRF-VAE is able to infer and render geometrically-consistent scenes from previously unseen 3D environments.
arXiv Detail & Related papers (2021-04-01T16:16:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.