A Dark Flash Normal Camera
- URL: http://arxiv.org/abs/2012.06125v1
- Date: Fri, 11 Dec 2020 05:08:22 GMT
- Title: A Dark Flash Normal Camera
- Authors: Zhihao Xia, Jason Lawrence, Supreeth Achar
- Abstract summary: Casual photography is often performed in uncontrolled lighting that can result in low quality images and degrade the performance of downstream processing.
We consider the problem of estimating surface normal and reflectance maps of scenes depicting people despite these conditions by supplementing the available visible illumination with a single near infrared (NIR) light source and camera, a so-called "dark flash image".
Our method takes as input a single color image captured under arbitrary visible lighting and a single dark flash image captured under controlled front-lit NIR lighting at the same viewpoint, and computes a normal map, a diffuse albedo map, and a specular intensity map of the scene.
- Score: 6.686241050151697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Casual photography is often performed in uncontrolled lighting that can
result in low quality images and degrade the performance of downstream
processing. We consider the problem of estimating surface normal and
reflectance maps of scenes depicting people despite these conditions by
supplementing the available visible illumination with a single near infrared
(NIR) light source and camera, a so-called "dark flash image". Our method takes
as input a single color image captured under arbitrary visible lighting and a
single dark flash image captured under controlled front-lit NIR lighting at the
same viewpoint, and computes a normal map, a diffuse albedo map, and a specular
intensity map of the scene. Since ground truth normal and reflectance maps of
faces are difficult to capture, we propose a novel training technique that
combines information from two readily available and complementary sources: a
stereo depth signal and photometric shading cues. We evaluate our method over a
range of subjects and lighting conditions and describe two applications:
optimizing stereo geometry and filling the shadows in an image.
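The abstract's training idea, combining a stereo depth signal with photometric shading cues, can be illustrated with a minimal sketch. This is not the paper's implementation: the finite-difference normal estimate, the assumed focal lengths, the Lambertian shading model, and the loss form are all illustrative assumptions.

```python
import numpy as np

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Rough per-pixel surface normals from a depth map via finite differences.

    A stand-in for the stereo depth supervision signal; fx/fy are assumed
    (hypothetical) focal-length scale factors.
    """
    dz_dx = np.gradient(depth, axis=1) * fx
    dz_dy = np.gradient(depth, axis=0) * fy
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def shading_loss(pred_normals, nir_image, light_dir, albedo):
    """Lambertian photometric consistency cue: the front-lit NIR flash image
    should approximate albedo * max(n . l, 0) when normals are correct."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    rendered = albedo * np.clip(pred_normals @ l, 0.0, None)
    return float(np.mean((rendered - nir_image) ** 2))
```

In a training loop, the depth-derived normals would supervise low-frequency shape while the shading term penalizes disagreement with the NIR observation, matching the abstract's claim that the two sources are complementary.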
Related papers
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- Controllable Light Diffusion for Portraits [8.931046902694984]
We introduce light diffusion, a novel method to improve lighting in portraits.
Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo.
arXiv Detail & Related papers (2023-05-08T14:46:28Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate through extensive experiments that our method is easy to implement, simple to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
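The PS-FCN entry above learns the mapping from reflectance observations to surface normals. The classical calibrated baseline it generalizes is Lambertian photometric stereo, which solves a per-pixel least-squares system given known light directions. The sketch below is that textbook baseline, not PS-FCN itself; the array shapes and function name are assumptions for illustration.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Classical calibrated photometric stereo for a Lambertian surface.

    intensities: (K, H, W) observations under K known directional lights.
    light_dirs:  (K, 3) unit light directions.
    Solves I = L @ (albedo * n) per pixel in the least-squares sense and
    returns (normals of shape (H, W, 3), albedo of shape (H, W)).
    """
    K, H, W = intensities.shape
    L = np.asarray(light_dirs, dtype=float)       # (K, 3) light matrix
    I = intensities.reshape(K, -1)                # (K, H*W) stacked pixels
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # G = albedo * n, (3, H*W)
    albedo = np.linalg.norm(G, axis=0)            # per-pixel albedo magnitude
    n = G / np.maximum(albedo, 1e-8)              # unit normals
    return n.T.reshape(H, W, 3), albedo.reshape(H, W)
```

Learning-based methods such as PS-FCN replace this closed-form Lambertian solve with a network that can handle general, unknown isotropic reflectance.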
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.