GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.04823v2
- Date: Tue, 28 Mar 2023 19:41:57 GMT
- Title: GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields
- Authors: Alessandro Ruzzi, Xiangwei Shi, Xi Wang, Gengyan Li, Shalini De Mello,
Hyung Jin Chang, Xucong Zhang, Otmar Hilliges
- Abstract summary: Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
- Score: 100.53114092627577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose GazeNeRF, a 3D-aware method for the task of gaze redirection.
Existing gaze redirection methods operate on 2D images and struggle to generate
3D consistent results. Instead, we build on the intuition that the face region
and eyeballs are separate 3D structures that move in a coordinated yet
independent fashion. Our method leverages recent advancements in conditional
image-based neural radiance fields and proposes a two-stream architecture that
predicts volumetric features for the face and eye regions separately. Rigidly
transforming the eye features via a 3D rotation matrix provides fine-grained
control over the desired gaze angle. The final, redirected image is then
attained via differentiable volume compositing. Our experiments show that this
architecture outperforms naively conditioned NeRF baselines as well as previous
state-of-the-art 2D gaze redirection methods in terms of redirection accuracy
and identity preservation.
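The core mechanism described above — rigidly rotating the eye-stream features by a 3D rotation matrix for the target gaze, then combining the two streams via differentiable volume compositing — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the yaw/pitch axis convention and the density-weighted colour mix are assumptions, and the function names are hypothetical.

```python
import numpy as np

def rotation_matrix_yaw_pitch(yaw, pitch):
    """Rotation matrix for a target gaze (yaw, pitch), in radians.
    Assumed convention: yaw about the y-axis, then pitch about the x-axis."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    R_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return R_pitch @ R_yaw

def composite_two_streams(sigma_face, rgb_face, sigma_eye, rgb_eye, deltas):
    """Volume-composite per-sample densities/colours from two streams along a ray.
    sigma_*: (N,) densities; rgb_*: (N, 3) colours; deltas: (N,) step sizes."""
    sigma = sigma_face + sigma_eye                  # densities add
    alpha = 1.0 - np.exp(-sigma * deltas)           # per-sample opacity
    # density-weighted colour mix at each sample (one plausible choice)
    rgb = (sigma_face[:, None] * rgb_face + sigma_eye[:, None] * rgb_eye) / (
        sigma[:, None] + 1e-10)
    # transmittance: probability the ray reaches sample i unoccluded
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * T
    return (weights[:, None] * rgb).sum(axis=0)     # final pixel colour
```

In the paper's pipeline this compositing step is differentiable (e.g. implemented with autograd tensors rather than NumPy), so the redirection loss can backpropagate through both streams; the rotation matrix would be applied to the eye stream's volumetric features before this step.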
Related papers
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z) - Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model [39.64952340472541]
We propose a controllable text-to-3D avatar generation method whose facial expression is controllable.
Our main strategy is to construct the 3D avatar in Neural Radiance Fields (NeRF) optimized with a set of controlled viewpoint-aware images.
We demonstrate the empirical results and discuss the effectiveness of our method.
arXiv Detail & Related papers (2023-09-07T08:14:46Z) - Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors [104.79392615848109]
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D meshes from a single unposed image.
In the first stage, we optimize a neural radiance field to produce a coarse geometry.
In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture.
arXiv Detail & Related papers (2023-06-30T17:59:08Z) - Accurate Gaze Estimation using an Active-gaze Morphable Model [9.192482716410511]
Rather than regressing gaze direction directly from images, we show that adding a 3D shape model can improve gaze estimation accuracy.
We equip this with a geometric vergence model of gaze to give an 'active-gaze 3DMM'.
Our method can learn with only the ground truth gaze target point and the camera parameters, without access to the ground truth gaze origin points.
arXiv Detail & Related papers (2023-01-30T18:51:14Z) - 3D GAN Inversion with Facial Symmetry Prior [42.22071135018402]
It is natural to associate 3D GANs with GAN inversion methods to project a real image into the generator's latent space.
We propose a novel method to promote 3D GAN inversion by introducing facial symmetry prior.
arXiv Detail & Related papers (2022-11-30T11:57:45Z) - Controllable Radiance Fields for Dynamic Face Synthesis [125.48602100893845]
We study how to explicitly control generative model synthesis of face dynamics exhibiting non-rigid motion.
To this end, we propose the Controllable Radiance Field (CoRF).
On head image/video data we show that CoRFs are 3D-aware while enabling editing of identity, viewing directions, and motion.
arXiv Detail & Related papers (2022-10-11T23:17:31Z) - 2D GANs Meet Unsupervised Single-view 3D Reconstruction [21.93671761497348]
Controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks.
We propose a novel image-conditioned neural implicit field, which can leverage 2D supervisions from GAN-generated multi-view images.
The effectiveness of our approach is demonstrated through superior single-view 3D reconstruction results of generic objects.
arXiv Detail & Related papers (2022-07-20T20:24:07Z) - GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images to 3D shape, albedo, and specular properties for different object categories, and achieves the state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z) - Solving Inverse Problems with NerfGANs [88.24518907451868]
We introduce a novel framework for solving inverse problems using NeRF-style generative models.
We show that naively optimizing the latent space leads to artifacts and poor novel view rendering.
We propose a novel radiance field regularization method to obtain better 3D surfaces and improved novel views given single view observations.
arXiv Detail & Related papers (2021-12-16T17:56:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.