Neural Light Field Estimation for Street Scenes with Differentiable
Virtual Object Insertion
- URL: http://arxiv.org/abs/2208.09480v1
- Date: Fri, 19 Aug 2022 17:59:16 GMT
- Title: Neural Light Field Estimation for Street Scenes with Differentiable
Virtual Object Insertion
- Authors: Zian Wang, Wenzheng Chen, David Acuna, Jan Kautz, Sanja Fidler
- Abstract summary: Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
- Score: 129.52943959497665
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the challenging problem of outdoor lighting estimation for the
goal of photorealistic virtual object insertion into photographs. Existing
works on outdoor lighting estimation typically simplify the scene lighting into
an environment map which cannot capture the spatially-varying lighting effects
in outdoor scenes. In this work, we propose a neural approach that estimates
the 5D HDR light field from a single image, and a differentiable object
insertion formulation that enables end-to-end training with image-based losses
that encourage realism. Specifically, we design a hybrid lighting
representation tailored to outdoor scenes, which contains an HDR sky dome that
handles the extreme intensity of the sun, and a volumetric lighting
representation that models the spatially-varying appearance of the surrounding
scene. With the estimated lighting, our shadow-aware object insertion is fully
differentiable, which enables adversarial training over the composited image to
provide additional supervisory signal to the lighting prediction. We
experimentally demonstrate that our hybrid lighting representation is more
performant than existing outdoor lighting estimation methods. We further show
the benefits of our AR object insertion in an autonomous driving application,
where we obtain performance gains for a 3D object detector when trained on our
augmented data.
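The abstract describes a differentiable, shadow-aware insertion step: the composited image stays a differentiable function of the estimated lighting, so image-based (e.g. adversarial) losses can supervise the lighting prediction. The sketch below illustrates that idea only in a heavily simplified form, with a Lambertian sun-plus-ambient shading model and a soft multiplicative shadow standing in for the paper's HDR sky dome, volumetric lighting, and renderer; all names and the shading model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of differentiable, shadow-aware object insertion (assumptions,
# not the paper's actual model): shading and shadowing are simple differentiable
# functions of the lighting parameters, so an image loss on the composite can
# backpropagate into them.
import torch
import torch.nn.functional as F

def composite_object(background, obj_albedo, obj_normals, obj_alpha,
                     shadow_mask, sun_dir, sun_radiance, ambient):
    """Differentiably insert a virtual object into a background image.

    background:  (3, H, W) photograph
    obj_albedo:  (3, H, W) rasterized object albedo
    obj_normals: (3, H, W) rasterized object normals (camera space)
    obj_alpha:   (1, H, W) object coverage mask in [0, 1]
    shadow_mask: (1, H, W) soft mask of pixels the object shadows
    sun_dir:     (3,)      unit vector toward the sun
    sun_radiance, ambient: (3,) HDR sun / ambient RGB terms (learnable here)
    """
    # Lambertian shading under a sun + ambient approximation of the light field.
    n_dot_l = (obj_normals * sun_dir.view(3, 1, 1)).sum(0, keepdim=True).clamp(min=0.0)
    obj_rgb = obj_albedo * (n_dot_l * sun_radiance.view(3, 1, 1) + ambient.view(3, 1, 1))

    # Shadowed background pixels keep only the ambient part of the lighting, so the
    # shadow strength itself depends (differentiably) on the estimated lighting.
    shadow_ratio = ambient / (ambient + sun_radiance + 1e-6)
    shadowed_bg = background * (1 - shadow_mask + shadow_mask * shadow_ratio.view(3, 1, 1))

    # Alpha-composite the shaded object over the (shadowed) background.
    return obj_alpha * obj_rgb + (1 - obj_alpha) * shadowed_bg

# Toy usage with random tensors standing in for real renders and masks.
H = W = 64
background  = torch.rand(3, H, W)
obj_albedo  = torch.rand(3, H, W)
obj_normals = F.normalize(torch.rand(3, H, W) - 0.5, dim=0)
obj_alpha   = torch.rand(1, H, W)
shadow_mask = torch.rand(1, H, W)

sun_dir      = F.normalize(torch.tensor([0.3, 0.8, 0.5]), dim=0)
sun_radiance = torch.tensor([5.0, 4.8, 4.5], requires_grad=True)
ambient      = torch.tensor([0.3, 0.35, 0.4], requires_grad=True)

img = composite_object(background, obj_albedo, obj_normals, obj_alpha,
                       shadow_mask, sun_dir, sun_radiance, ambient)
# Any image-space loss (in the paper, an adversarial realism loss) can now
# backpropagate into the lighting parameters.
img.mean().backward()
```

Because every step is built from differentiable tensor operations, gradients from a loss on the composite flow back into the lighting parameters, which is the mechanism the paper exploits for end-to-end training with image-based losses.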
Related papers
- A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields [38.370278809341954]
We present the first real-time method for inserting a rigid virtual object into a neural radiance field.
By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality.
arXiv Detail & Related papers (2023-10-09T16:26:34Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method that combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to impact shading, while the rich, complex reflections seamlessly blend with those edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that incorporates 3D spatially-varying lighting (a minimal sketch of querying such a spatially-varying lighting representation appears after this list).
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z)
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis [45.71507069571216]
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
This produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
arXiv Detail & Related papers (2020-12-07T18:56:08Z)
- Learning Illumination from Diverse Portraits [8.90355885907736]
We train our model using portrait photos paired with their ground truth environmental illumination.
We generate a rich set of such photos by using a light stage to record the reflectance field and alpha matte of 70 diverse subjects.
We show that our technique outperforms the state-of-the-art technique for portrait-based lighting estimation.
arXiv Detail & Related papers (2020-08-05T23:41:23Z)
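Several of the works above, like the volumetric lighting term in the main abstract, rely on lighting that can be queried at an arbitrary 3D location rather than a single global environment map. As a minimal sketch of such a query, and not any of these papers' actual code, the snippet below stores per-voxel lighting coefficients (e.g. low-order spherical harmonics per color channel) in a grid and reads them out at virtual-object insertion points by trilinear interpolation; all names and shapes are illustrative assumptions.

```python
# Toy sketch of querying a spatially-varying lighting volume: a voxel grid holds
# per-cell lighting coefficients, and the lighting at a 3D insertion point is
# obtained by trilinear interpolation via grid_sample.
import torch
import torch.nn.functional as F

def query_lighting_volume(volume, points, scene_min, scene_max):
    """volume:    (C, D, H, W) lighting coefficients on a voxel grid
                  (D indexes world z, H indexes y, W indexes x).
    points:    (N, 3) query locations in world coordinates (x, y, z).
    scene_min, scene_max: (3,) axis-aligned bounds covered by the volume.
    Returns (N, C) interpolated lighting coefficients per query point."""
    # Normalize world coordinates to grid_sample's [-1, 1] range.
    norm = 2.0 * (points - scene_min) / (scene_max - scene_min) - 1.0
    # grid_sample takes a (1, 1, 1, N, 3) grid of (x, y, z) sampling locations
    # for a (1, C, D, H, W) input volume.
    grid = norm.view(1, 1, 1, -1, 3)
    sampled = F.grid_sample(volume.unsqueeze(0), grid, mode="bilinear",
                            align_corners=True)              # (1, C, 1, 1, N)
    return sampled.reshape(volume.shape[0], -1).t()          # (N, C)

# Example: 9 SH coefficients x 3 color channels = 27 values per voxel.
volume = torch.rand(27, 16, 16, 16)
scene_min = torch.tensor([-10.0, -10.0, 0.0])
scene_max = torch.tensor([10.0, 10.0, 5.0])
insertion_points = torch.tensor([[2.0, -3.0, 0.5], [0.0, 0.0, 1.0]])
coeffs = query_lighting_volume(volume, insertion_points, scene_min, scene_max)
print(coeffs.shape)  # torch.Size([2, 27])
```

The voxel-grid lookup is just the simplest concrete stand-in for a spatially-varying lighting field; the same query could equally be served by an MLP conditioned on position.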