Towards Learning Neural Representations from Shadows
- URL: http://arxiv.org/abs/2203.15946v1
- Date: Tue, 29 Mar 2022 23:13:41 GMT
- Title: Towards Learning Neural Representations from Shadows
- Authors: Kushagra Tiwary, Tzofi Klinghoffer and Ramesh Raskar
- Abstract summary: We present a method that learns neural scene representations from only shadows present in the scene.
Our framework is highly generalizable and can work alongside existing 3D reconstruction techniques.
- Score: 11.60149896896201
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a method that learns neural scene representations from only
shadows present in the scene. While traditional shape-from-shadow (SfS)
algorithms reconstruct geometry from shadows, they assume a fixed scanning
setup and fail to generalize to complex scenes. Neural rendering algorithms, on
the other hand, rely on photometric consistency between RGB images but largely
ignore physical cues such as shadows, which have been shown to provide valuable
information about the scene. We observe that shadows are a powerful cue that
can constrain neural scene representations to learn SfS, and can even outperform
NeRF in reconstructing otherwise hidden geometry. We propose a graphics-inspired
differentiable approach to render accurate shadows with volumetric rendering,
predicting a shadow map that can be compared to the ground truth shadow. Even
with just binary shadow maps, we show that neural rendering can localize the
object and estimate coarse geometry. Our approach reveals that sparse cues in
images can be used to estimate geometry using differentiable volumetric
rendering. Moreover, our framework is highly generalizable and can work
alongside existing 3D reconstruction techniques that otherwise only use
photometric consistency. Our code is made available in our supplementary
materials.
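A minimal sketch of the shadow-supervision idea described above, assuming a directional light and a NeRF-style density field (the `density_fn`, sampling bounds, and loss choice are illustrative assumptions, not the authors' released implementation): transmittance accumulated from each surface point toward the light acts as a differentiable "lit" probability that can be compared against the binary shadow map.

```python
import torch
import torch.nn.functional as F

def light_transmittance(density_fn, points, light_dir,
                        t_near=0.05, t_far=2.0, n_samples=64):
    """Transmittance from each 3D point toward a directional light.
    Values near 1 mean the point is lit; values near 0 mean it is shadowed."""
    ts = torch.linspace(t_near, t_far, n_samples)                # (S,)
    # March from every point toward the light: (N, S, 3)
    samples = points[:, None, :] + ts[None, :, None] * light_dir
    sigma = density_fn(samples.reshape(-1, 3)).reshape(points.shape[0], n_samples)
    delta = (t_far - t_near) / n_samples
    # Beer-Lambert absorption: T = exp(-sum_i sigma_i * delta)
    return torch.exp(-(sigma * delta).sum(dim=-1))

def shadow_loss(density_fn, points, light_dir, gt_lit):
    """Binary cross-entropy between the predicted 'lit' probability and the
    ground-truth binary shadow map (1 = lit, 0 = in shadow)."""
    pred = light_transmittance(density_fn, points, light_dir)
    return F.binary_cross_entropy(pred.clamp(1e-5, 1 - 1e-5), gt_lit)
```

Because the loss is differentiable with respect to the density field, even binary shadow maps yield gradients that localize the object and carve out coarse geometry.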
Related papers
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
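To illustrate the secondary-ray idea in the entry above (primary rays query the neural field, while shadow rays are tested against the reconstructed explicit mesh), here is a hedged NumPy sketch using a brute-force Möller-Trumbore occlusion test; a real system would use an accelerated ray tracer and the paper's own pipeline.

```python
import numpy as np

def ray_triangle_t(orig, d, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore intersection: returns the hit distance t, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:                 # ray parallel to the triangle
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = d.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return t if t > eps else None

def in_shadow(point, light_pos, triangles):
    """Shadow (secondary) ray test: the point is shadowed if any mesh
    triangle lies between it and a point light."""
    d = light_pos - point
    dist = np.linalg.norm(d)
    d /= dist
    origin = point + 1e-4 * d          # offset to avoid self-intersection
    for v0, v1, v2 in triangles:
        t = ray_triangle_t(origin, d, v0, v1, v2)
        if t is not None and t < dist:
            return True
    return False
```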
- ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision [19.441669467054158]
We propose a novel shadow ray supervision scheme that optimizes both the samples along the ray and the ray location.
We successfully reconstruct a neural SDF of the scene from single-view images under multiple lighting conditions.
By further modeling the correlation between the image colors and the shadow rays, our technique can also be effectively extended to RGB inputs.
arXiv Detail & Related papers (2022-11-25T13:14:56Z)
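A hedged sketch of the shadow-ray idea in the ShadowNeuS entry above, assuming an SDF network `sdf` and converting signed distance to occupancy with a NeuS-style logistic heuristic; the paper's exact weighting and its optimization of the ray location are not reproduced here.

```python
import torch

def shadow_ray_visibility(sdf, point, light_dir,
                          n_samples=64, t_max=2.0, beta=50.0):
    """Differentiable visibility along a shadow ray through an SDF field.
    Small or negative signed distances map to high occupancy via a logistic,
    so gradients flow to the SDF wherever the ray grazes the surface."""
    ts = torch.linspace(1e-2, t_max, n_samples)
    samples = point[None, :] + ts[:, None] * light_dir[None, :]   # (S, 3)
    alpha = torch.sigmoid(-beta * sdf(samples))                   # occupancy proxy
    return torch.prod(1.0 - alpha + 1e-6)   # probability the light is unoccluded

# Supervision: compare the predicted visibility with the observed shadow value
# (1 = lit, 0 = shadowed) at the corresponding pixel, e.g. with an MSE loss.
```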
- S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images.
It supports applications like novel-view synthesis and relighting.
arXiv Detail & Related papers (2022-10-17T11:01:52Z)
- Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose.
arXiv Detail & Related papers (2022-07-12T08:29:51Z)
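As a toy illustration of casting hard shadows from an image-space height representation (a simplification: the paper's pixel-height encoding also models camera pose, which this sketch ignores; `light_dxy` and `light_dz` are hypothetical parameters for the light direction's image-plane and vertical components):

```python
import numpy as np

def hard_shadow_from_heights(height, light_dxy, light_dz, n_steps=64):
    """Mark a pixel as shadowed if, stepping from it toward the light in
    image space, any pixel's height rises above the ray's height there."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    shadow = np.zeros((h, w), dtype=bool)
    for step in range(1, n_steps):
        x = np.clip(xs + int(round(step * light_dxy[0])), 0, w - 1)
        y = np.clip(ys + int(round(step * light_dxy[1])), 0, h - 1)
        ray_height = height + step * light_dz   # height of the light ray here
        shadow |= height[y, x] > ray_height
    return shadow
```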
- DeepShadow: Neural Shape from Shadow [12.283891012446647]
DeepShadow is a one-shot method for recovering the depth map and surface normals from photometric stereo shadow maps.
We show that self and cast shadows not only do not disturb 3D reconstruction but can be used alone as a strong learning signal.
Our method is the first to reconstruct 3D shape-from-shadows using neural networks.
arXiv Detail & Related papers (2022-03-28T20:11:15Z)
- Advances in Neural Rendering [115.05042097988768]
This report focuses on methods that combine classical rendering with learned 3D scene representations.
A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene.
In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects.
arXiv Detail & Related papers (2021-11-10T18:57:01Z)
- R2D: Learning Shadow Removal to Enhance Fine-Context Shadow Detection [64.10636296274168]
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear or have blurry edges.
We propose a new method called Restore to Detect (R2D), where a deep neural network is trained for restoration (shadow removal) in order to aid shadow detection.
We show that our proposed method R2D improves the shadow detection performance while being able to detect fine context better compared to the other recent methods.
arXiv Detail & Related papers (2021-09-20T15:09:22Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
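A minimal sketch of the single-evaluation idea (the published LFNs are meta-learned via hypernetworks, which this strips away): parameterize each ray by its Plücker coordinates, which do not depend on the point chosen along the ray, and map them to a color with one MLP forward pass.

```python
import torch
import torch.nn as nn

def plucker(origins, directions):
    """Plucker ray coordinates (d, o x d): a 6D parameterization that is
    invariant to which point on the ray is chosen as the origin."""
    d = directions / directions.norm(dim=-1, keepdim=True)
    return torch.cat([d, torch.cross(origins, d, dim=-1)], dim=-1)

class LightFieldNetwork(nn.Module):
    """Maps a ray directly to a color in a single forward pass,
    with no ray marching."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, origins, directions):
        return self.mlp(plucker(origins, directions))   # (N, 3) colors
```

Each pixel then costs exactly one network evaluation, versus the dozens to hundreds of field queries per ray that a ray marcher needs.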
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
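A simplified ray-marching sketch under stated assumptions: the hypothetical `field` returns per-point density, normal, and albedo, and shading is a plain Lambertian term for one directional light, standing in for the paper's full BRDF model and light transport.

```python
import torch

def render_reflectance_field(field, origins, dirs, light_dir,
                             n_samples=64, t_near=0.1, t_far=4.0):
    """Ray-march a field returning (sigma, normal, albedo) per sample and
    shade each sample with a Lambertian term before compositing.
    `light_dir` is a unit vector pointing from the scene toward the light."""
    ts = torch.linspace(t_near, t_far, n_samples)
    pts = origins[:, None, :] + ts[None, :, None] * dirs[:, None, :]  # (N,S,3)
    sigma, normal, albedo = field(pts)        # (N,S), (N,S,3), (N,S,3)
    shading = (normal * light_dir).sum(-1).clamp(min=0.0)             # n . l
    delta = (t_far - t_near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)
    # Transmittance up to each sample, then standard alpha compositing.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]
    weights = trans * alpha                                           # (N,S)
    return (weights[..., None] * albedo * shading[..., None]).sum(dim=1)
```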
- SSN: Soft Shadow Network for Image Compositing [26.606890595862826]
We introduce an interactive Soft Shadow Network (SSN) to generate controllable soft shadows for image compositing.
SSN takes a 2D object mask as input and thus is agnostic to image types such as painting and vector art.
An environment light map is used to control the shadow's characteristics, such as angle and softness.
arXiv Detail & Related papers (2020-07-16T09:36:39Z)
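In the same spirit as the SSN entry above (the real architecture, light-map encoding, and training procedure differ; this only illustrates the input/output contract), a tiny encoder-decoder mapping a 2D object mask plus a light-conditioning channel to a soft shadow map might look like:

```python
import torch
import torch.nn as nn

class SoftShadowNet(nn.Module):
    """Tiny encoder-decoder: object mask + light channel in, soft shadow out."""
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, mask, light):
        # mask, light: (B, 1, H, W) with H and W divisible by 4
        return self.decode(self.encode(torch.cat([mask, light], dim=1)))

# Example: a 64x64 mask and light channel produce a 64x64 soft shadow map.
# shadow = SoftShadowNet()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```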
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.