NeLF: Neural Light-transport Field for Portrait View Synthesis and
Relighting
- URL: http://arxiv.org/abs/2107.12351v1
- Date: Mon, 26 Jul 2021 17:44:52 GMT
- Authors: Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, Ravi Ramamoorthi
- Abstract summary: We use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under a new environmental lighting.
Our method achieves simultaneous view synthesis and relighting given multi-view portraits as the input, and achieves state-of-the-art results.
- Score: 49.73715814270705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human portraits exhibit various appearances when observed from different
views under different lighting conditions. We can easily imagine how the face
will look in another setup, but computer algorithms still fail at this
problem given limited observations. To this end, we present a system for
portrait view synthesis and relighting: given multiple portraits, we use a
neural network to predict the light-transport field in 3D space, and from the
predicted Neural Light-transport Field (NeLF) produce a portrait from a new
camera view under a new environmental lighting. Our system is trained on a
large number of synthetic models, and can generalize to different synthetic and
real portraits under various lighting conditions. Our method performs
simultaneous view synthesis and relighting given multi-view portraits as
input, and achieves state-of-the-art results.
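The relighting side of a light-transport field rests on a standard identity: under distant lighting, light transport is linear, so a pixel's color is the inner product of its transport vector with the environment lighting. The following toy sketch illustrates that identity only; all shapes, names, and random stand-ins are illustrative and not the paper's implementation:

```python
import numpy as np

H, W, L = 4, 4, 16  # toy image size and number of sampled light directions

rng = np.random.default_rng(0)
# Stand-in for the network-predicted light-transport field: how much light
# arriving from each of the L directions contributes to each pixel.
transport = rng.random((H, W, L))
# Environment map sampled at the same L directions, RGB intensity each.
env = rng.random((L, 3))

# Relighting: each pixel's color is its transport vector dotted with the lighting.
image = np.einsum('hwl,lc->hwc', transport, env)
assert image.shape == (H, W, 3)

# Linearity of light transport: relighting under the sum of two environments
# equals the sum of the two separately relit images.
env2 = rng.random((L, 3))
lhs = np.einsum('hwl,lc->hwc', transport, env + env2)
rhs = image + np.einsum('hwl,lc->hwc', transport, env2)
assert np.allclose(lhs, rhs)
```

This linearity is what makes the formulation attractive: once the transport field is predicted for a new camera view, relighting to any environment map is a single matrix contraction rather than a new network evaluation per lighting condition.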
Related papers
- Real-time 3D-aware Portrait Video Relighting [89.41078798641732]
We present the first real-time 3D-aware method for relighting in-the-wild videos of talking faces based on Neural Radiance Fields (NeRF).
We infer an albedo tri-plane, as well as a shading tri-plane based on a desired lighting condition for each video frame with fast dual-encoders.
Our method runs at 32.98 fps on consumer-level hardware and achieves state-of-the-art results in terms of reconstruction quality, lighting error, lighting instability, temporal consistency and inference speed.
arXiv Detail & Related papers (2024-10-24T01:34:11Z)
- Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures [33.463245327698]
We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling.
We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions.
We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject.
arXiv Detail & Related papers (2024-10-01T12:24:50Z)
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- Semantic-aware Generation of Multi-view Portrait Drawings [16.854527555637063]
We propose a Semantic-Aware GEnerator (SAGE) for synthesizing multi-view portrait drawings.
Our motivation is that facial semantic labels are view-consistent and correlate with drawing techniques.
SAGE achieves significantly superior or highly competitive performance, compared to existing 3D-aware image synthesis methods.
arXiv Detail & Related papers (2023-05-04T07:48:27Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if they appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- NARRATE: A Normal Assisted Free-View Portrait Stylizer [42.38374601073052]
NARRATE is a novel pipeline that enables simultaneously editing portrait lighting and perspective in a photorealistic manner.
We experimentally demonstrate that NARRATE achieves more photorealistic, reliable results over prior works.
We showcase vivid free-view facial animations as well as 3D-aware relighting, which help facilitate various AR/VR applications.
arXiv Detail & Related papers (2022-07-03T07:54:05Z)
- Relightable 3D Head Portraits from a Smartphone Video [15.639140551193073]
We present a system for creating a relightable 3D portrait of a human head.
Our neural pipeline operates on a sequence of frames captured by a smartphone camera with the flash blinking.
A deep rendering network is trained to regress dense albedo, normals, and environmental lighting maps for arbitrary new viewpoints.
arXiv Detail & Related papers (2020-12-17T22:49:02Z)
- Portrait Neural Radiance Fields from a Single Image [68.66958204066721]
We present a method for estimating Neural Radiance Fields (NeRF) from a single portrait.
We propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density.
To improve the generalization to unseen faces, we train in a canonical coordinate space approximated by 3D face morphable models.
We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-art methods.
arXiv Detail & Related papers (2020-12-10T18:59:59Z)
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
- Portrait Shadow Manipulation [37.414681268753526]
Casually-taken portrait photographs often suffer from unflattering lighting and shadowing because of suboptimal conditions in the environment.
We present a computational approach that gives casual photographers some of this control, thereby allowing poorly-lit portraits to be relit post-capture in a realistic and easily-controllable way.
Our approach relies on a pair of neural networks: one to remove foreign shadows cast by external objects, and another to soften facial shadows cast by the subject's own features and add a synthetic fill light to improve the lighting ratio.
arXiv Detail & Related papers (2020-05-18T17:51:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.