Learning Illumination from Diverse Portraits
- URL: http://arxiv.org/abs/2008.02396v1
- Date: Wed, 5 Aug 2020 23:41:23 GMT
- Title: Learning Illumination from Diverse Portraits
- Authors: Chloe LeGendre, Wan-Chun Ma, Rohit Pandey, Sean Fanello, Christoph
Rhemann, Jason Dourgarian, Jay Busch, Paul Debevec
- Abstract summary: We train our model using portrait photos paired with their ground truth environmental illumination.
We generate a rich set of such photos by using a light stage to record the reflectance field and alpha matte of 70 diverse subjects.
We show that our technique outperforms the state-of-the-art technique for portrait-based lighting estimation.
- Score: 8.90355885907736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a learning-based technique for estimating high dynamic range
(HDR), omnidirectional illumination from a single low dynamic range (LDR)
portrait image captured under arbitrary indoor or outdoor lighting conditions.
We train our model using portrait photos paired with their ground truth
environmental illumination. We generate a rich set of such photos by using a
light stage to record the reflectance field and alpha matte of 70 diverse
subjects in various expressions. We then relight the subjects using image-based
relighting with a database of one million HDR lighting environments,
compositing the relit subjects onto paired high-resolution background imagery
recorded during the lighting acquisition. We train the lighting estimation
model using rendering-based loss functions and add a multi-scale adversarial
loss to estimate plausible high frequency lighting detail. We show that our
technique outperforms the state-of-the-art technique for portrait-based
lighting estimation, and we also show that our method reliably handles the
inherent ambiguity between overall lighting strength and surface albedo,
recovering a similar scale of illumination for subjects with diverse skin
tones. We demonstrate that our method allows virtual objects and digital
characters to be added to a portrait photograph with consistent illumination.
Our lighting inference runs in real-time on a smartphone, enabling realistic
rendering and compositing of virtual objects into live video for augmented
reality applications.
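The data-generation step rests on image-based relighting: given the light stage's one-light-at-a-time (OLAT) reflectance field, a subject's appearance under any environment is a linear combination of the OLAT images weighted by the environment's radiance around each light's direction. Below is a minimal numpy sketch of that idea, plus the kind of rendering-based loss the abstract mentions; the direction-to-pixel mapping, the uniform solid-angle weights, and the L1 comparison are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np

def relight(olat_images, light_dirs, hdr_env):
    """Image-based relighting: weight each one-light-at-a-time (OLAT)
    image by the HDR environment's radiance at its light direction.

    olat_images: (N, H, W, 3) reflectance field, one image per light
    light_dirs:  (N, 3) unit vectors toward each light-stage light
    hdr_env:     (He, We, 3) equirectangular HDR lighting environment
    """
    He, We, _ = hdr_env.shape
    relit = np.zeros(olat_images.shape[1:], dtype=np.float64)
    for img, d in zip(olat_images, light_dirs):
        # Map the light direction to equirectangular coordinates (Y-up
        # convention assumed; nearest-pixel lookup for simplicity).
        theta = np.arccos(np.clip(d[1], -1.0, 1.0))      # polar angle
        phi = np.arctan2(d[0], -d[2]) % (2 * np.pi)      # azimuth
        u = int(phi / (2 * np.pi) * We) % We
        v = min(int(theta / np.pi * He), He - 1)
        # Each light stands in for a patch of the sphere; assume a
        # uniform rig so every light covers the same solid angle.
        solid_angle = 4 * np.pi / len(light_dirs)
        relit += img * hdr_env[v, u] * solid_angle
    return relit

def rendering_loss(pred_env, gt_env, olat_images, light_dirs):
    """Rendering-based loss: render the same subject under predicted
    and ground-truth illumination and compare (L1 here)."""
    return np.abs(relight(olat_images, light_dirs, pred_env)
                  - relight(olat_images, light_dirs, gt_env)).mean()
```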
Related papers
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
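Diffusion-based image-to-image relighting means the network denoises toward the relit frame while conditioned on the flat-lit input and a lighting code. A minimal DDPM-style sampling sketch in PyTorch; the `denoiser` signature, the noise schedule, and the conditioning inputs are all stand-in assumptions rather than the paper's architecture:

```python
import torch

@torch.no_grad()
def sample_relit(denoiser, flat_lit, light_code, steps=50):
    """DDPM-style ancestral sampling conditioned on a flat-lit
    portrait and a target-lighting embedding (both assumed)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(flat_lit)                  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, flat_lit, light_code, t)  # predicted noise
        x = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```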
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- Controllable Light Diffusion for Portraits [8.931046902694984]

We introduce light diffusion, a novel method to improve lighting in portraits.
Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo.
arXiv Detail & Related papers (2023-05-08T14:46:28Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
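The interface implied here is a per-position lighting query: the network consumes an LDR frame plus depth and returns a local HDR lighting estimate for any pixel. A hedged sketch of such a query; the backbone, the `sh_head`, and the spherical-harmonic output format are assumptions, not the paper's actual API:

```python
import torch

def query_lighting(net, ldr_image, depth, positions):
    """Spatially-varying lighting query (interface assumed).

    ldr_image: (3, H, W), depth: (1, H, W), positions: (K, 2) pixels
    returns:   (K, 9, 3) 2nd-order spherical-harmonic RGB coefficients
    """
    feats = net.backbone(torch.cat([ldr_image, depth], dim=0)[None])
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    H, W = ldr_image.shape[1:]
    grid = positions.clone().float()
    grid[:, 0] = grid[:, 0] / (W - 1) * 2 - 1
    grid[:, 1] = grid[:, 1] / (H - 1) * 2 - 1
    local = torch.nn.functional.grid_sample(
        feats, grid[None, :, None, :], align_corners=True)  # (1, C, K, 1)
    return net.sh_head(local[0, :, :, 0].T).reshape(-1, 9, 3)
```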
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method which combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to impact shading, while the panorama provides rich, complex reflections that blend seamlessly with the edits.
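One way to read the hybrid representation: the editable parametric lights are rendered into an HDR equirectangular panorama, so an edit to a light's direction or intensity immediately changes both shading and reflections. A sketch that splats angular-Gaussian lobes into a panorama; the lobe model is an assumed stand-in, not EverLight's exact parameterization:

```python
import numpy as np

def lights_to_panorama(lights, height=128, width=256):
    """Render parametric lights into an HDR equirectangular panorama.
    Each light: dict(direction=(3,) unit vector, color=(3,), size=float)."""
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    theta = (v + 0.5) / height * np.pi           # polar angle
    phi = (u + 0.5) / width * 2 * np.pi          # azimuth
    dirs = np.stack([np.sin(theta) * np.sin(phi),
                     np.cos(theta),
                     -np.sin(theta) * np.cos(phi)], axis=-1)  # Y-up
    pano = np.zeros((height, width, 3))
    for light in lights:
        cos = dirs @ np.asarray(light["direction"])            # (H, W)
        lobe = np.exp((cos - 1.0) / max(light["size"], 1e-4))  # peaks at light
        pano += lobe[..., None] * np.asarray(light["color"])
    return pano

# Editing is direct: brighten one light and re-render the panorama.
lights = [{"direction": (0, 1, 0), "color": (5.0, 5.0, 4.0), "size": 0.05}]
lights[0]["color"] = tuple(2 * c for c in lights[0]["color"])
pano = lights_to_panorama(lights)
```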
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
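The flashlight acts as a known point light colocated with the camera: subtracting a no-flash frame from a flash frame isolates a near-light photometric term that constrains albedo (and, across views, geometry). A Lambertian sketch of that idea, assuming known geometry; the paper's full reflectance model and joint recovery are considerably richer:

```python
import numpy as np

def albedo_from_flash_pair(img_flash, img_noflash, points, normals,
                           cam_pos, flash_intensity=1.0):
    """Estimate Lambertian albedo from a flash/no-flash pair.
    The difference image is explained by the flashlight alone:
        diff = albedo * intensity * max(0, n . l) / r^2
    points/normals: per-pixel 3D positions and unit normals (H, W, 3),
    assumed known here (the paper recovers them jointly)."""
    diff = np.clip(img_flash - img_noflash, 0, None)  # flashlight-only image
    to_light = cam_pos - points                        # flash colocated w/ camera
    r2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
    l = to_light / np.sqrt(r2)
    ndotl = np.clip(np.sum(normals * l, axis=-1, keepdims=True), 0, None)
    shading = flash_intensity * ndotl / r2
    return diff / np.clip(shading, 1e-6, None)         # invert the shading
```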
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
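A 5D light field generalizes an environment map: radiance depends on the 3D query position as well as the 2D incoming direction, which is what lets objects inserted at different spots in a street scene receive different light. A sketch that bakes a local environment map by querying such a field; the network and its inputs are assumptions:

```python
import math
import torch

def local_env_map(net, scene_feats, position, height=32, width=64):
    """Bake a local equirectangular environment map by querying a 5D
    light field L(x, omega) at one 3D position (interface assumed)."""
    v = (torch.arange(height) + 0.5) / height * math.pi
    u = (torch.arange(width) + 0.5) / width * 2 * math.pi
    theta, phi = torch.meshgrid(v, u, indexing="ij")
    dirs = torch.stack([theta.sin() * phi.sin(),
                        theta.cos(),
                        -theta.sin() * phi.cos()], dim=-1).reshape(-1, 3)
    xyz = position.expand(dirs.shape[0], 3)            # same point, all dirs
    radiance = net(scene_feats, torch.cat([xyz, dirs], dim=-1))  # (H*W, 3)
    return radiance.reshape(height, width, 3)          # feed to the renderer
```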
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
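The HDR part of such a pipeline can be pictured as a confidence-weighted merge of the bracketed frames in linear raw space: each frame is normalized by its exposure time, and well-exposed pixels get the most weight. A sketch of that classic merge; the paper's actual joint HDR + super-resolution optimization is considerably more involved:

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Exposure-weighted HDR merge of aligned linear raw frames.
    frames: list of (H, W) arrays in [0, 1]; the hat-shaped weight
    favors mid-tones and excludes clipped pixels. Alignment and
    demosaicking are assumed to have been done already."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposure_times):
        w = np.where((img > 0.02) & (img < 0.98),        # drop clipped pixels
                     1.0 - (2.0 * img - 1.0) ** 2, 0.0)  # peak weight at 0.5
        num += w * img / t                                # radiance estimate
        den += w
    return num / np.clip(den, 1e-8, None)
```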
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
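The hybrid idea is to shade rasterized G-buffers cheaply for the diffuse term while evaluating the specular term by sampling the environment, rather than running a full physics-based renderer. A per-pixel sketch under a Lambertian-plus-Phong model; this is a simplification standing in for the paper's shading models:

```python
import numpy as np

def hybrid_shade(normals, view_dirs, albedo, spec, shininess,
                 env_dirs, env_radiance):
    """Shade rasterized G-buffers: diffuse cosine term plus a sampled
    Phong specular lobe (stand-in for ray-traced specular).
    normals/view_dirs/albedo: (H, W, 3); env_dirs: (N, 3) uniform
    sphere samples with per-sample radiance env_radiance: (N, 3)."""
    H, W, _ = normals.shape
    color = np.zeros((H, W, 3))
    # Reflect the view direction about the surface normal.
    refl = 2 * np.sum(normals * view_dirs, -1, keepdims=True) * normals \
           - view_dirs
    for l, L in zip(env_dirs, env_radiance):
        ndotl = np.clip(normals @ l, 0, None)[..., None]   # diffuse cosine
        rdotl = np.clip(refl @ l, 0, None)[..., None]      # specular lobe
        color += (albedo * ndotl + spec * rdotl ** shininess) * L
    return color * (4 * np.pi / len(env_dirs))             # MC estimate
```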
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Neural Video Portrait Relighting in Real-time via Consistency Modeling [41.04622998356025]
We propose a neural approach for real-time, high-quality and coherent video portrait relighting.
We propose a hybrid structure and lighting disentanglement in an encoder-decoder architecture.
We also propose a lighting sampling strategy to model illumination consistency and variation, enabling natural portrait light manipulation in real-world settings.
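Disentangling structure from lighting in an encoder-decoder means relighting reduces to swapping codes: encode a frame, keep its structure code, and decode with a lighting code taken from the target illumination. A toy PyTorch sketch of that split; the module sizes and heads are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class RelightAE(nn.Module):
    """Toy encoder-decoder with an explicit structure/lighting split."""
    def __init__(self, latent=256, light_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(64, latent, 4, 2, 1))
        self.light_head = nn.Linear(latent, light_dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(latent + light_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, frame, target_light=None):
        feat = self.enc(frame)                            # structure features
        light = self.light_head(feat.mean(dim=(2, 3)))    # lighting code
        if target_light is not None:
            light = target_light                          # swap code = relight
        light_map = light[:, :, None, None].expand(-1, -1, *feat.shape[2:])
        return self.dec(torch.cat([feat, light_map], dim=1)), light
```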
arXiv Detail & Related papers (2021-04-01T14:13:28Z)
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report we present the methods we tried for achieving scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with an illumination estimation component aiming to infer and replace light conditions in the latent space representation of the concerned scenes.
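In this design the illumination lives in a slice of the latent code, so relighting is latent surgery: infer the light portion of one scene's code and overwrite it in another's before decoding. A sketch assuming a trained encoder/decoder pair and a fixed latent split, both of which are assumptions:

```python
import torch

LIGHT_DIMS = slice(0, 32)  # assumed: first 32 latent dims encode illumination

@torch.no_grad()
def relight_scene(encoder, decoder, scene, light_reference):
    """Replace the illumination slice of `scene`'s latent code with the
    one inferred from `light_reference`, then decode."""
    z_scene = encoder(scene)            # latent of the scene to relight
    z_light = encoder(light_reference)  # latent of the lighting donor
    z_scene[:, LIGHT_DIMS] = z_light[:, LIGHT_DIMS]  # swap light conditions
    return decoder(z_scene)
```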
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.