Relightable 3D Head Portraits from a Smartphone Video
- URL: http://arxiv.org/abs/2012.09963v1
- Date: Thu, 17 Dec 2020 22:49:02 GMT
- Title: Relightable 3D Head Portraits from a Smartphone Video
- Authors: Artem Sevastopolsky, Savva Ignatiev, Gonzalo Ferrer, Evgeny Burnaev,
Victor Lempitsky
- Abstract summary: We present a system for creating a relightable 3D portrait of a human head.
Our neural pipeline operates on a sequence of frames captured by a smartphone camera with the flash blinking.
A deep rendering network is trained to regress dense albedo, normals, and environmental lighting maps for arbitrary new viewpoints.
- Score: 15.639140551193073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, a system for creating a relightable 3D portrait of a human head
is presented. Our neural pipeline operates on a sequence of frames captured by
a smartphone camera with the flash blinking (flash-no flash sequence). A coarse
point cloud reconstructed via structure-from-motion software and multi-view
denoising is then used as a geometric proxy. Afterwards, a deep rendering
network is trained to regress dense albedo, normals, and environmental lighting
maps for arbitrary new viewpoints. Effectively, the proxy geometry and the
rendering network constitute a relightable 3D portrait model that can be
synthesized from an arbitrary viewpoint and under arbitrary lighting, e.g. a
directional light, a point light, or an environment map. The model is fitted to
the sequence of frames with human face-specific priors that enforce the
plausibility of the albedo-lighting decomposition, and it operates at
interactive frame rates. We evaluate the performance of the method under
varying lighting conditions and at extrapolated viewpoints, and compare it with
existing relighting methods.
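As an illustration of what the regressed maps make possible, below is a minimal relighting sketch, not the authors' implementation: it assumes a simple Lambertian shading model and combines hypothetical per-pixel albedo and normal maps with a single directional light. The function name, tensor shapes, and shading formula are illustrative assumptions; the actual system also supports point lights and environment maps.

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir,
                       light_color=(1.0, 1.0, 1.0), ambient=0.05):
    """Illustrative Lambertian relighting of a single view (not the paper's renderer).

    albedo:    (H, W, 3) per-pixel albedo map, e.g. as predicted by a rendering network
    normals:   (H, W, 3) per-pixel unit normals in the same frame as light_dir
    light_dir: (3,) direction towards the light source
    Returns an (H, W, 3) relit image; a full system would add specular terms,
    shadows, and environment-map lighting on top of this diffuse term.
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    # Diffuse term: clamped cosine between the surface normal and the light direction.
    cos_theta = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, None)
    shading = ambient + cos_theta[..., None] * np.asarray(light_color, dtype=np.float32)
    return np.clip(albedo * shading, 0.0, 1.0)

# Toy usage with random maps standing in for network predictions.
H, W = 4, 4
albedo = np.random.rand(H, W, 3).astype(np.float32)
normals = np.random.randn(H, W, 3).astype(np.float32)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
relit = relight_lambertian(albedo, normals, light_dir=[0.3, 0.5, 1.0])
print(relit.shape)  # (4, 4, 3)
```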
Related papers
- Real-time 3D-aware Portrait Video Relighting [89.41078798641732]
We present the first real-time 3D-aware method for relighting in-the-wild videos of talking faces based on Neural Radiance Fields (NeRF).
We infer an albedo tri-plane, as well as a shading tri-plane based on a desired lighting condition for each video frame with fast dual-encoders.
Our method runs at 32.98 fps on consumer-level hardware and achieves state-of-the-art results in terms of reconstruction quality, lighting error, lighting instability, temporal consistency and inference speed.
arXiv Detail & Related papers (2024-10-24T01:34:11Z)
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing [21.498078188364566]
We present a novel differentiable point-based rendering framework to achieve photo-realistic relighting.
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.
arXiv Detail & Related papers (2023-11-27T18:07:58Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- NARRATE: A Normal Assisted Free-View Portrait Stylizer [42.38374601073052]
NARRATE is a novel pipeline that enables simultaneously editing portrait lighting and perspective in a photorealistic manner.
We experimentally demonstrate that NARRATE achieves more photorealistic, reliable results over prior works.
We showcase vivid free-view facial animations as well as 3D-aware relightable stylization, which help facilitate various AR/VR applications.
arXiv Detail & Related papers (2022-07-03T07:54:05Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Deep Portrait Lighting Enhancement with 3D Guidance [24.01582513386902]
We present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance.
Experimental results on the FFHQ dataset and in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-08-04T15:49:09Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
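Several of the entries above, Neural Reflectance Fields in particular, shade samples inside a differentiable ray march rather than relighting finished images. The following is a minimal, generic alpha-compositing sketch of that idea, not any specific paper's renderer; `query_fn`, the uniform sampling scheme, and the diffuse-only shading are all illustrative assumptions.

```python
import numpy as np

def render_ray(query_fn, ray_o, ray_d, light_dir, n_samples=64, t_near=0.1, t_far=4.0):
    """Toy volume rendering of one ray with per-sample diffuse shading.

    query_fn(x) is assumed to return (density, albedo[3], normal[3]) at a 3D point;
    in a neural reflectance field this would be a learned network.
    """
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    color = np.zeros(3, dtype=np.float32)
    transmittance = 1.0
    for t in ts:
        x = ray_o + t * ray_d
        sigma, albedo, normal = query_fn(x)
        alpha = 1.0 - np.exp(-sigma * dt)              # opacity of this ray segment
        shaded = albedo * max(np.dot(normal, l), 0.0)  # simple diffuse shading at the sample
        color += transmittance * alpha * shaded
        transmittance *= 1.0 - alpha
    return color

# Toy usage: a constant "foggy Lambertian" field standing in for a trained network.
field = lambda x: (0.5, np.array([0.8, 0.6, 0.5]), np.array([0.0, 0.0, 1.0]))
print(render_ray(field, np.zeros(3), np.array([0.0, 0.0, 1.0]), light_dir=[0.3, 0.5, 1.0]))
```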