WildLight: In-the-wild Inverse Rendering with a Flashlight
- URL: http://arxiv.org/abs/2303.14190v1
- Date: Fri, 24 Mar 2023 17:59:56 GMT
- Title: WildLight: In-the-wild Inverse Rendering with a Flashlight
- Authors: Ziang Cheng, Junxuan Li, Hongdong Li
- Abstract summary: We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
- Score: 77.31815397135381
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a practical photometric solution for the challenging
problem of in-the-wild inverse rendering under unknown ambient lighting. Our
system recovers scene geometry and reflectance using only multi-view images
captured by a smartphone. The key idea is to exploit the smartphone's built-in
flashlight as a minimally controlled light source, and to decompose image
intensities into two photometric components -- a static appearance
corresponding to the ambient flux, plus a dynamic reflection induced by the
moving flashlight.
Our method does not require flash/non-flash images to be captured in pairs.
Building on the success of neural light fields, we use an off-the-shelf method
to capture the ambient reflections, while the flashlight component enables
physically accurate photometric constraints to decouple reflectance and
illumination. Compared to existing inverse rendering methods, our setup is
applicable to non-darkroom environments, yet it sidesteps the inherent
difficulty of explicitly solving for ambient reflections. We demonstrate by extensive
experiments that our method is easy to implement, casual to set up, and
consistently outperforms existing in-the-wild inverse rendering techniques.
Finally, our neural reconstruction can be easily exported as a PBR-textured
triangle mesh, ready for use in industrial renderers.
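To make the two-component model concrete: assuming, purely for illustration, a Lambertian BRDF and a point flash co-located with the camera (the abstract does not specify the paper's exact reflectance model), the per-pixel decomposition can be sketched as below; all names and signatures here are hypothetical.

```python
import numpy as np

def flash_component(points, normals, albedo, cam_pos, flash_intensity):
    # Dynamic component: reflection of the co-located flashlight.
    # Illustrative Lambertian model only, not the paper's actual BRDF.
    to_cam = cam_pos - points                       # flash sits at the camera
    d2 = np.sum(to_cam**2, axis=-1, keepdims=True)  # squared flash distance
    w_i = to_cam / np.sqrt(d2)                      # unit light direction
    cos_theta = np.clip(np.sum(normals * w_i, axis=-1, keepdims=True), 0.0, None)
    # Point-light inverse-square falloff times the Lambertian BRDF (albedo/pi).
    return (albedo / np.pi) * flash_intensity * cos_theta / d2

def observed_intensity(ambient, points, normals, albedo, cam_pos, flash_intensity):
    # Static ambient appearance (e.g. a neural light field lookup) plus the
    # physically modeled flashlight reflection.
    return ambient + flash_component(points, normals, albedo, cam_pos, flash_intensity)
```

Because the ambient term is independent of the flash while the flash term varies with the (known) camera pose, flash-on images expose physically grounded constraints on reflectance that the static ambient component alone cannot provide.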
Related papers
- GaNI: Global and Near Field Illumination Aware Neural Inverse Rendering [21.584362527926654]
GaNI can reconstruct geometry, albedo, and roughness parameters from images of a scene captured with a co-located light and camera.
Existing inverse rendering techniques with a co-located light and camera focus on single objects only.
arXiv Detail & Related papers (2024-03-22T23:47:19Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method that combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to impact shading, while rich, complex reflections blend seamlessly with the edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- Nighttime Smartphone Reflective Flare Removal Using Optical Center Symmetry Prior [81.64647648269889]
Reflective flare is a phenomenon that occurs when light reflects inside lenses, causing bright spots or a "ghosting effect" in photos.
We propose an optical center symmetry prior, which suggests that the reflective flare and light source are always symmetrical around the lens's optical center (a point-reflection sketch follows this entry).
We create the first reflective flare removal dataset called BracketFlare, which contains diverse and realistic reflective flare patterns.
arXiv Detail & Related papers (2023-03-27T09:44:40Z)
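The symmetry prior above reduces to a one-line geometric rule: the flare lies at the point reflection of the light source about the optical center. A minimal sketch of that rule (helper name and pixel conventions are assumptions, not the paper's code):

```python
import numpy as np

def predicted_flare_position(light_px, optical_center_px):
    # Point reflection about the optical center: the reflective flare
    # appears diametrically opposite the light source.
    return 2.0 * np.asarray(optical_center_px) - np.asarray(light_px)

# A light source at pixel (100, 200) with the optical center at (500, 500)
# predicts a flare near (900, 800).
print(predicted_flare_position((100.0, 200.0), (500.0, 500.0)))
```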
- Weakly-supervised Single-view Image Relighting [17.49214457620938]
We present a learning-based approach to relight a single image of Lambertian and low-frequency specular objects.
Our method enables inserting objects from photographs into new scenes and relighting them under the new environment lighting.
arXiv Detail & Related papers (2023-03-24T08:20:16Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities (a generic sketch of such a joint objective follows this entry).
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
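As a rough illustration of the joint optimization mentioned above: under the classic Lambertian image model, every pixel couples the surface normal to the light direction and intensity, so all three can share one differentiable objective. This is a generic textbook formulation, not the paper's actual network or loss:

```python
import numpy as np

def lambertian_render(normals, light_dir, light_intensity, albedo):
    # I = albedo * intensity * max(0, n . l); normals are (N, 3) unit
    # vectors, light_dir a (3,) unit vector.
    shading = np.clip(normals @ light_dir, 0.0, None)
    return albedo * light_intensity * shading

def photometric_loss(normals, light_dirs, intensities, albedo, images):
    # Joint objective over shape (normals), light directions, and light
    # intensities: squared reprojection error summed over all images.
    return sum(
        np.sum((lambertian_render(normals, l, s, albedo) - img) ** 2)
        for l, s, img in zip(light_dirs, intensities, images)
    )
```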
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light (a minimal ray-marching sketch follows this entry).
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
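The differentiable ray marching mentioned above is, in its standard emission-absorption form, an accumulation of transmittance-weighted shaded samples along each ray. A minimal sketch under that assumption (not the paper's implementation):

```python
import numpy as np

def march_ray(densities, shaded_colors, step):
    # densities: (S,) volume density at each sample along the ray;
    # shaded_colors: (S, 3) reflectance shaded under the current light.
    alpha = 1.0 - np.exp(-densities * step)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                                # compositing weights
    return (weights[:, None] * shaded_colors).sum(axis=0)  # final pixel color
```

Because every step is differentiable, gradients of a photometric loss flow back to the density, normal, and reflectance predictions.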
- Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)