EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation
and Relighting of Human Eyes
- URL: http://arxiv.org/abs/2206.08428v1
- Date: Thu, 16 Jun 2022 20:05:04 GMT
- Title: EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation
and Relighting of Human Eyes
- Authors: Gengyan Li (1 and 2), Abhimitra Meka (1), Franziska Müller (1),
Marcel C. Bühler (2), Otmar Hilliges (2) ((1) Google Inc., (2) ETH
Zürich)
- Abstract summary: We present a novel geometry and appearance representation that enables high-fidelity capture and animation, view synthesis and relighting of the eye region using only a sparse set of lights and cameras.
We show that for high-resolution close-ups of the eye, our model can synthesize high-fidelity animated gaze from novel views under unseen illumination conditions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A unique challenge in creating high-quality animatable and relightable 3D
avatars of people is modeling human eyes. The challenge of synthesizing eyes is
multifold as it requires 1) appropriate representations for the various
components of the eye and the periocular region for coherent viewpoint
synthesis, capable of representing diffuse, refractive and highly reflective
surfaces, 2) disentangling skin and eye appearance from environmental
illumination such that it may be rendered under novel lighting conditions, and
3) capturing eyeball motion and the deformation of the surrounding skin to
enable re-gazing. These challenges have traditionally necessitated the use of
expensive and cumbersome capture setups to obtain high-quality results, and
even then, modeling of the eye region holistically has remained elusive. We
present a novel geometry and appearance representation that enables
high-fidelity capture and photorealistic animation, view synthesis and
relighting of the eye region using only a sparse set of lights and cameras. Our
hybrid representation combines an explicit parametric surface model for the
eyeball with implicit deformable volumetric representations for the periocular
region and the interior of the eye. This novel hybrid model has been designed
to address the various parts of that challenging facial area - the explicit
eyeball surface allows modeling refraction and high-frequency specular
reflection at the cornea, whereas the implicit representation is well suited to
model lower-frequency skin reflection via spherical harmonics and can represent
non-surface structures such as hair or diffuse volumetric bodies, both of which
are a challenge for explicit surface models. We show that for high-resolution
close-ups of the eye, our model can synthesize high-fidelity animated gaze from
novel views under unseen illumination conditions.
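The abstract names two rendering ingredients that the hybrid split is built around: refraction and specular reflection at the explicit corneal surface, and low-frequency skin reflection modeled with spherical harmonics in the implicit volume. The sketch below is an illustrative toy version of those two primitives only (vector-form Snell refraction and a degree-2 real SH evaluation), not the paper's implementation; the function names and the corneal refractive index of 1.376 are assumptions for the example.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta = n_outside / n_inside (e.g. ~1.0/1.376 for air -> cornea).
    Returns the refracted unit direction, or None on total internal
    reflection. Vector form of Snell's law.
    """
    cos_i = -np.dot(d, n)                       # cosine of incident angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)  # squared sine of refracted angle
    if sin2_t > 1.0:
        return None                             # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    t = eta * d + (eta * cos_i - cos_t) * n
    return t / np.linalg.norm(t)

def sh_eval(coeffs, d):
    """Evaluate a degree-2 real spherical-harmonic expansion at unit
    direction d = (x, y, z), given 9 SH coefficients.

    Degree 2 suffices for the smooth, low-frequency reflectance the
    implicit skin/eye-interior representation is meant to capture.
    """
    x, y, z = d
    basis = np.array([
        0.282095,                     # l=0
        0.488603 * y,                 # l=1, m=-1
        0.488603 * z,                 # l=1, m= 0
        0.488603 * x,                 # l=1, m= 1
        1.092548 * x * y,             # l=2, m=-2
        1.092548 * y * z,             # l=2, m=-1
        0.315392 * (3.0*z*z - 1.0),   # l=2, m= 0
        1.092548 * x * z,             # l=2, m= 1
        0.546274 * (x*x - y*y),       # l=2, m= 2
    ])
    return float(coeffs @ basis)
```

The division of labor this illustrates is the core design choice: sharp, view-dependent effects (corneal refraction, high-frequency specular highlights) need an explicit surface to bend rays against, while a 9-coefficient SH expansion can only represent smooth angular variation, which is exactly what is wanted for diffuse skin and volumetric structures like hair.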
Related papers
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Relightable Gaussian Codec Avatars [26.255161061306428]
We present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions.
Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.
We improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models.
arXiv Detail & Related papers (2023-12-06T18:59:58Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- High-Fidelity Eye Animatable Neural Radiance Fields for Human Face [22.894881396543926]
We learn a face NeRF model that is sensitive to eye movements from multi-view images.
We show that our model is capable of generating high-fidelity images with accurate eyeball rotation and non-rigid periocular deformation.
arXiv Detail & Related papers (2023-08-01T18:26:55Z)
- MEGANE: Morphable Eyeglass and Avatar Network [83.65790119755053]
We propose a 3D compositional morphable model of eyeglasses.
We employ a hybrid representation that combines surface geometry and a volumetric representation.
Our approach models global light transport effects, such as casting shadows between faces and glasses.
arXiv Detail & Related papers (2023-02-09T18:59:49Z)
- 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photo telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights [42.83346543247565]
We show that GAN synthesized faces can be exposed with the inconsistent corneal specular highlights between two eyes.
The inconsistency is caused by the lack of physical/physiological constraints in the GAN models.
arXiv Detail & Related papers (2020-09-24T19:43:16Z)
- Learning Implicit Surface Light Fields [34.89812112073539]
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks.
We propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.
Our model is able to infer rich visual appearance including shadows and specular reflections.
arXiv Detail & Related papers (2020-03-27T13:17:45Z)
- Seeing the World in a Bag of Chips [73.561388215585]
We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.
Our contributions include 1) modeling highly specular objects, 2) modeling inter-reflections and Fresnel effects, and 3) enabling surface light field reconstruction with the same input needed to reconstruct shape alone.
arXiv Detail & Related papers (2020-01-14T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.