Artist-Friendly Relightable and Animatable Neural Heads
- URL: http://arxiv.org/abs/2312.03420v1
- Date: Wed, 6 Dec 2023 11:06:46 GMT
- Title: Artist-Friendly Relightable and Animatable Neural Heads
- Authors: Yingyan Xu, Prashanth Chandran, Sebastian Weiss, Markus Gross, Gaspard Zoss, Derek Bradley
- Abstract summary: A common approach for creating photo-realistic digital avatars is through the use of volumetric neural fields.
Recent variants also overcame the usual drawback of baked-in illumination in neural representations, showing that static neural avatars can be relit in any environment.
Our method builds on a proven dynamic avatar approach based on a mixture of volumetric primitives, combined with a recently proposed lightweight hardware setup for relightable neural fields.
- Score: 15.803111500220888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An increasingly common approach for creating photo-realistic digital avatars
is through the use of volumetric neural fields. The original neural radiance
field (NeRF) allowed for impressive novel view synthesis of static heads when
trained on a set of multi-view images, and follow-up methods showed that these
neural representations can be extended to dynamic avatars. Recently, new
variants also overcame the usual drawback of baked-in illumination in neural
representations, showing that static neural avatars can be relit in any
environment. In this work we simultaneously tackle both the motion and
illumination problem, proposing a new method for relightable and animatable
neural heads. Our method builds on a proven dynamic avatar approach based on a
mixture of volumetric primitives, combined with a recently proposed lightweight
hardware setup for relightable neural fields, and includes a novel architecture
that allows relighting dynamic neural avatars performing unseen expressions in
any environment, even with nearfield illumination and viewpoints.
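The mixture-of-volumetric-primitives representation the abstract refers to can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch query of a point against a set of animated RGBA voxel primitives; all names, shapes, and the simple summed aggregation are illustrative assumptions, not the authors' implementation:

```python
# A minimal sketch of querying a mixture of volumetric primitives (MVP),
# the dynamic-avatar representation this paper builds on. Shapes, names,
# and the summed aggregation are illustrative assumptions only.
import torch
import torch.nn.functional as F

def query_mvp(x, centers, rotations, scales, payloads):
    """Accumulate RGBA from every primitive whose box contains point x.

    x:         (3,) query point in world space
    centers:   (K, 3) primitive centers (driven by the animation model)
    rotations: (K, 3, 3) primitive orientations
    scales:    (K, 3) per-axis half-extents
    payloads:  (K, 4, R, R, R) per-primitive RGBA voxel grids, laid out so
               local (x, y, z) matches grid_sample's (W, H, D) convention
    """
    # Express the query point in each primitive's local [-1, 1]^3 cube.
    local = torch.einsum('kij,kj->ki',
                         rotations.transpose(1, 2),
                         x[None] - centers) / scales          # (K, 3)
    inside = (local.abs() <= 1.0).all(dim=-1)                 # (K,)

    rgba = torch.zeros(4)
    for k in inside.nonzero(as_tuple=True)[0]:
        # Trilinearly sample primitive k's voxel payload at the local coord.
        grid = payloads[k][None]                              # (1, 4, R, R, R)
        coord = local[k].view(1, 1, 1, 1, 3)
        rgba = rgba + F.grid_sample(grid, coord,
                                    align_corners=True).view(4)
    return rgba  # summed RGBA, consumed by an outer raymarching loop
```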
Related papers
- BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading [3.447848701446988]
We introduce BecomingLit, a novel method for reconstructing relightable, high-resolution head avatars that can be rendered from novel viewpoints at interactive rates.
We collect a novel dataset consisting of diverse multi-view sequences of numerous subjects under varying illumination conditions.
We propose a new hybrid neural shading approach, combining a neural diffuse BRDF with an analytical specular term.
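As a rough illustration of the hybrid shading idea summarized above, the sketch below combines a learned diffuse term with a closed-form specular lobe. The MLP conditioning and the Blinn-Phong stand-in are assumptions, not BecomingLit's actual shading model:

```python
# Illustrative sketch of hybrid neural shading: a small MLP supplies the
# diffuse response while specular reflection comes from a closed-form lobe.
import torch
import torch.nn.functional as F

def shade(features, normal, view_dir, light_dir, light_color,
          diffuse_mlp, roughness, f0=0.04):
    """Directional inputs are (P, 3) unit vectors; features is (P, C);
    roughness is a (P, 1) tensor."""
    n_dot_l = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)

    # Neural diffuse BRDF: learned from data, conditioned on per-point
    # appearance features and the incoming light direction.
    diffuse = diffuse_mlp(torch.cat([features, light_dir], dim=-1))

    # Analytical specular: a Blinn-Phong lobe with a standard
    # roughness-to-shininess mapping (n = 2 / alpha^2 - 2), standing in
    # for whatever parametric lobe the paper actually uses.
    half = F.normalize(view_dir + light_dir, dim=-1)
    shininess = (2.0 / roughness.clamp(min=1e-3) ** 2 - 2.0).clamp(min=0.0)
    n_dot_h = (normal * half).sum(-1, keepdim=True).clamp(min=0.0)
    specular = f0 * n_dot_h.pow(shininess)

    return light_color * n_dot_l * (diffuse + specular)
```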
arXiv Detail & Related papers (2025-06-06T17:53:58Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis [7.485318043174123]
We introduce BakedAvatar, a novel representation for real-time neural head avatar synthesis.
Our approach extracts layered meshes from learned isosurfaces of the head and computes expression-, pose-, and view-dependent appearances.
Experimental results demonstrate that our representation generates photorealistic results of comparable quality to other state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T17:05:53Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar (NPVA), a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
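The surface-guided placement described above can be sketched as sampling a learned UV displacement map and offsetting points from a tracked expression surface. This is a hypothetical illustration (scalar offsets along normals), not the paper's code:

```python
# Hypothetical sketch of surface-guided neural point placement: points are
# offset from a tracked expression surface along its normals by a learned
# high-resolution UV displacement map. All names are illustrative.
import torch
import torch.nn.functional as F

def place_points(base_positions, base_normals, uv, displacement_map):
    """base_positions, base_normals: (P, 3) samples on the tracked surface
    uv: (P, 2) texture coordinates in [-1, 1]
    displacement_map: (1, 1, H, W) learned per-texel offsets"""
    disp = F.grid_sample(displacement_map, uv.view(1, 1, -1, 2),
                         align_corners=True).view(-1, 1)      # (P, 1)
    return base_positions + disp * base_normals               # (P, 3)
```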
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- Neural Radiance Fields (NeRFs): A Review and Some Recent Developments [0.0]
Neural Radiance Field (NeRF) is a framework that represents a 3D scene in the weights of a fully connected neural network.
NeRFs have become a popular field of research as recent developments have been made that expand the performance and capabilities of the base framework.
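For reference, the base framework the review covers fits in two equations: a fully connected network maps position and view direction to color and density, and pixels are rendered by the standard volume-rendering quadrature along each camera ray:

```latex
% MLP scene representation and the volume-rendering quadrature from the
% original NeRF paper; delta_i = t_{i+1} - t_i is the sample spacing.
F_\Theta : (\mathbf{x}, \mathbf{d}) \mapsto (\mathbf{c}, \sigma), \qquad
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad T_i = \exp\!\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr)
```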
arXiv Detail & Related papers (2023-04-30T03:23:58Z)
- MegaPortraits: One-shot Megapixel Neural Head Avatars [7.05068904295608]
We propose a set of new neural architectures and training methods that can leverage both medium-resolution video data and high-resolution image data.
We show how a trained high-resolution neural avatar model can be distilled into a lightweight student model which runs in real-time.
Real-time operation and identity lock are essential for many practical applications of head avatar systems.
arXiv Detail & Related papers (2022-07-15T17:32:37Z)
- KiloNeuS: Implicit Neural Representations with Real-Time Global Illumination [1.5749416770494706]
We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates.
KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes.
arXiv Detail & Related papers (2022-06-22T07:33:26Z)
- Neural Rays for Occlusion-aware Image-based Rendering [108.34004858785896]
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view rendering images of unseen scenes with little finetuning.
arXiv Detail & Related papers (2021-07-28T15:09:40Z)
- Fast Training of Neural Lumigraph Representations using Meta Learning [109.92233234681319]
We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
arXiv Detail & Related papers (2021-06-28T18:55:50Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to the dynamic scenes with human movements via introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction [9.747648609960185]
We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face.
Especially, for telepresence applications in AR or VR, a faithful reproduction of the appearance including novel viewpoints or head-poses is required.
arXiv Detail & Related papers (2020-12-05T16:01:16Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
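The extension to a dynamic domain described above is commonly realized as a two-network split: a deformation field warps points sampled at time t back to a shared canonical frame, where an ordinary radiance field is queried. The sketch below illustrates that split; the module interfaces are hypothetical placeholders rather than the D-NeRF release:

```python
# Sketch of a deformation-plus-canonical dynamic radiance field. Module
# interfaces are hypothetical placeholders, not the D-NeRF codebase.
import torch
import torch.nn as nn

class DynamicRadianceField(nn.Module):
    def __init__(self, deform_net: nn.Module, canonical_nerf: nn.Module):
        super().__init__()
        self.deform = deform_net          # (x, t) -> displacement dx
        self.canonical = canonical_nerf   # (x, d) -> (rgb, sigma)

    def forward(self, x, d, t):
        """x: (N, 3) sample points, d: (N, 3) view dirs, t: scalar time."""
        # Warp each sample from the frame at time t into canonical space.
        dx = self.deform(torch.cat([x, t.expand(x.shape[0], 1)], dim=-1))
        # Query the static canonical field at the warped locations.
        return self.canonical(x + dx, d)
```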
arXiv Detail & Related papers (2020-11-27T19:06:50Z)