ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision
- URL: http://arxiv.org/abs/2211.14086v2
- Date: Thu, 23 Mar 2023 14:21:24 GMT
- Title: ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision
- Authors: Jingwang Ling, Zhibo Wang, Feng Xu
- Abstract summary: We propose a novel shadow ray supervision scheme that optimizes both the samples along the ray and the ray location.
We successfully reconstruct a neural SDF of the scene from single-view images under multiple lighting conditions.
By further modeling the correlation between the image colors and the shadow rays, our technique can also be effectively extended to RGB inputs.
- Score: 19.441669467054158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By supervising camera rays between a scene and multi-view image planes, NeRF
reconstructs a neural scene representation for the task of novel view
synthesis. On the other hand, shadow rays between the light source and the
scene have yet to be considered. Therefore, we propose a novel shadow ray
supervision scheme that optimizes both the samples along the ray and the ray
location. By supervising shadow rays, we successfully reconstruct a neural SDF
of the scene from single-view images under multiple lighting conditions. Given
single-view binary shadows, we train a neural network to reconstruct a complete
scene not limited by the camera's line of sight. By further modeling the
correlation between the image colors and the shadow rays, our technique can
also be effectively extended to RGB inputs. We compare our method with previous
works on challenging tasks of shape reconstruction from single-view binary
shadow or RGB images and observe significant improvements. The code and data
are available at https://github.com/gerwang/ShadowNeuS.
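As a concrete illustration of the supervision signal, below is a minimal PyTorch sketch: shadow rays are marched from surface points toward the light, an SDF-derived density yields a transmittance (predicted light visibility), and that visibility is compared against the observed binary shadow. All names here (`sdf_net`, the logistic density with slope `beta`) are illustrative stand-ins for the paper's NeuS-style formulation, and the sketch omits the paper's differentiable optimization of the ray's start location.

```python
import torch
import torch.nn.functional as F

def shadow_ray_loss(sdf_net, surface_pts, light_dir, shadow_gt,
                    n_samples=64, far=2.0, beta=50.0):
    """surface_pts: (N, 3) points where camera rays meet the scene.
    light_dir: (3,) unit direction toward a distant light.
    shadow_gt: (N,) float tensor, 1.0 if the point is lit, 0.0 if shadowed."""
    t = torch.linspace(1e-3, far, n_samples, device=surface_pts.device)    # (S,)
    pts = surface_pts[:, None, :] + t[None, :, None] * light_dir           # (N, S, 3)
    sdf = sdf_net(pts.reshape(-1, 3)).reshape(pts.shape[0], n_samples)     # (N, S)
    sigma = beta * torch.sigmoid(-beta * sdf)  # logistic density: high inside geometry
    delta = far / n_samples                    # uniform step length along the ray
    # Transmittance toward the light is the predicted visibility (1 lit, 0 occluded).
    visibility = torch.exp(-(sigma * delta).sum(dim=-1))
    return F.binary_cross_entropy(visibility.clamp(1e-5, 1 - 1e-5), shadow_gt)
```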
Related papers
- Gaussian Shadow Casting for Neural Characters [20.78790953284832]
We propose a new shadow model using a Gaussian density proxy that replaces sampling with a simple analytic formula.
It supports dynamic motion and is tailored for shadow computation, thereby avoiding the affine projection approximation and sorting required by the closely related Gaussian splatting.
We demonstrate improved reconstructions, with better separation of albedo, shading, and shadows in challenging outdoor scenes with direct sun light and hard shadows.
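For intuition, the "analytic formula" can be read as follows: the line integral of a 3D Gaussian density along a ray has a closed form, so shadow-ray transmittance can be evaluated without sampling or sorting. Below is a sketch under the simplifying assumption of isotropic blobs (the paper's character proxy uses general Gaussians); all names are illustrative.

```python
import math
import torch

def gaussian_transmittance(ray_o, ray_d, mu, amp, sigma):
    """Transmittance of one ray through K isotropic Gaussian density blobs.
    ray_o, ray_d: (3,) origin and unit direction; mu: (K, 3); amp, sigma: (K,)."""
    v = mu - ray_o                        # (K, 3) offsets to the Gaussian centers
    t0 = (v * ray_d).sum(-1)              # closest-approach parameter along the ray
    r2 = (v * v).sum(-1) - t0 ** 2        # squared perpendicular distance to the line
    # Closed-form integral of each Gaussian over t in [0, inf):
    # amp * exp(-r2 / (2 sigma^2)) * sigma * sqrt(2 pi) * Phi(t0 / sigma)
    mass = 0.5 * (1.0 + torch.erf(t0 / (sigma * math.sqrt(2.0))))
    integral = (amp * torch.exp(-r2 / (2.0 * sigma ** 2))
                * sigma * math.sqrt(2.0 * math.pi) * mass)
    return torch.exp(-integral.sum())     # order-independent: no sorting required
```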
arXiv Detail & Related papers (2024-01-11T18:50:31Z)
- Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field [69.90548694719683]
We propose an analysis-synthesis approach called Relit-NeuLF.
We first parameterize each ray in a 4D coordinate system, enabling efficient learning and inference.
Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data.
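A common concrete choice for such a 4D ray code is the two-plane parameterization, sketched below for illustration; the plane depths `z_near`/`z_far` are assumed constants, and whether this matches the paper's exact parameterization is not stated in the summary.

```python
import torch

def two_plane_coords(ray_o, ray_d, z_near=0.0, z_far=1.0):
    """ray_o, ray_d: (N, 3). Returns (N, 4) = (u, v, s, t), the xy-intersections
    of each ray with the planes z = z_near and z = z_far.
    Assumes no ray is parallel to the planes (ray_d[:, 2] != 0)."""
    t1 = (z_near - ray_o[:, 2]) / ray_d[:, 2]
    t2 = (z_far - ray_o[:, 2]) / ray_d[:, 2]
    uv = ray_o[:, :2] + t1[:, None] * ray_d[:, :2]
    st = ray_o[:, :2] + t2[:, None] * ray_d[:, :2]
    return torch.cat([uv, st], dim=-1)   # the 4D code fed to the network
```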
arXiv Detail & Related papers (2023-10-23T07:29:51Z)
- S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images.
It supports applications like novel-view synthesis and relighting.
arXiv Detail & Related papers (2022-10-17T11:01:52Z)
- Towards Learning Neural Representations from Shadows [11.60149896896201]
We present a method that learns neural scene representations from only shadows present in the scene.
Our framework is highly generalizable and can work alongside existing 3D reconstruction techniques.
arXiv Detail & Related papers (2022-03-29T23:13:41Z)
- Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition [50.94535765549819]
Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics.
We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination.
Our decompositions can result in considerably better BRDF and light estimates enabling more accurate novel view-synthesis and relighting.
arXiv Detail & Related papers (2021-10-27T12:17:47Z)
- R2D: Learning Shadow Removal to Enhance Fine-Context Shadow Detection [64.10636296274168]
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear or have blurry edges.
We propose a new method called Restore to Detect (R2D), where a deep neural network is trained for restoration (shadow removal) and its complementary cues are then leveraged for shadow detection.
We show that our proposed method R2D improves the shadow detection performance while being able to detect fine context better compared to the other recent methods.
arXiv Detail & Related papers (2021-09-20T15:09:22Z)
- Neural Rays for Occlusion-aware Image-based Rendering [108.34004858785896]
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view rendering images of unseen scenes with little finetuning.
arXiv Detail & Related papers (2021-07-28T15:09:40Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric approaches.
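A minimal sketch of the single-evaluation idea: encode each ray (here as 6D Plücker coordinates, the embedding LFNs use) and map it to color with one MLP call. The network width and depth below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyLFN(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, ray_o, ray_d):
        """ray_o, ray_d: (N, 3). Returns (N, 3) RGB in one network evaluation."""
        d = ray_d / ray_d.norm(dim=-1, keepdim=True)
        m = torch.cross(ray_o, d, dim=-1)     # moment: invariant to where the
        plucker = torch.cat([d, m], dim=-1)   # origin sits along the ray
        return self.mlp(plucker)              # one evaluation per ray
```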
arXiv Detail & Related papers (2021-06-04T17:54:49Z) - TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer [6.320273914694594]
We propose and evaluate a neural point-based graphics method that can model semi-transparent scene parts.
We show that novel views of semi-transparent point cloud scenes can be generated after training with our approach.
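Read as front-to-back alpha compositing of the points a ray passes through, the accumulation step might look like the sketch below; this illustrates the compositing only, not TRANSPR's learned point descriptors.

```python
import torch

def composite_points(colors, alphas):
    """colors: (P, 3), alphas: (P,), points hit by one ray, sorted front to back."""
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:1]), 1.0 - alphas[:-1]]), dim=0)
    weights = trans * alphas                       # contribution of each point
    return (weights[:, None] * colors).sum(dim=0)  # accumulated ray color (3,)
```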
arXiv Detail & Related papers (2020-09-06T21:19:18Z) - NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
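For reference, the differentiable quadrature NeRF optimizes through is the standard compositing below (a sketch, not the authors' code): per-sample opacities are weighted by accumulated transmittance, i.e. C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i.

```python
import torch

def volume_render(sigma, rgb, t_vals):
    """sigma: (N, S) densities, rgb: (N, S, 3) colors, t_vals: (S,) sample depths."""
    delta = t_vals[1:] - t_vals[:-1]                      # (S-1,) interval lengths
    alpha = 1.0 - torch.exp(-sigma[:, :-1] * delta)       # per-interval opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = trans * alpha                               # (N, S-1)
    return (weights[..., None] * rgb[:, :-1]).sum(dim=1)  # (N, 3) pixel colors
```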
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.