Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
- URL: http://arxiv.org/abs/2303.05807v2
- Date: Sat, 30 Dec 2023 02:42:12 GMT
- Title: Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
- Authors: Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, Tatsuya Harada
- Abstract summary: Vanilla NeRF is viewer-centred and simplifies the rendering process to light emission from 3D locations along the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
- Score: 65.96818069005145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commonly captured low-light scenes are challenging for most computer
vision techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is
viewer-centred and simplifies the rendering process to light emission from 3D
locations along the viewing direction, thus failing to model low-illumination-induced
darkness. Inspired by the emission theory of the ancient Greeks, which holds that
visual perception is accomplished by rays cast from the eyes, we make slight
modifications to vanilla NeRF so that it trains on multiple views of a low-light
scene and can then render the well-lit scene in an unsupervised manner. We
introduce a surrogate concept, Concealing Fields, which reduce the transport of
light during the volume rendering stage. Specifically, our proposed method,
Aleth-NeRF, learns directly from dark images to recover both the volumetric object
representation and the Concealing Fields under priors. By simply eliminating the
Concealing Fields, we can render well-lit images from single or multiple views and
achieve superior performance over 2D low-light enhancement methods. Additionally,
we collect the first paired LOw-light and normal-light Multi-view (LOM) datasets
for future research. This version is invalid; please refer to our new AAAI version:
arXiv:2312.09093
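To make the mechanism concrete: in NeRF's discrete volume rendering, a ray's color is composited as C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j). One plausible reading of the abstract is that the Concealing Field contributes an extra per-sample attenuation Omega_j in (0, 1] multiplied into that transmittance. The sketch below is a minimal illustration under that assumption; the function name render_ray, the argument omega, and the exact placement of the concealing values are illustrative guesses, not the paper's actual formulation.

```python
import numpy as np

def render_ray(sigma, rgb, delta, omega=None):
    """Discrete NeRF-style volume rendering along one ray.

    sigma : (N,)   density at each of the N samples along the ray
    rgb   : (N, 3) emitted color at each sample
    delta : (N,)   spacing between adjacent samples
    omega : (N,)   hypothetical per-sample concealing values in (0, 1];
                   None recovers the standard (well-lit) rendering
    """
    alpha = 1.0 - np.exp(-sigma * delta)   # per-sample opacity
    seg_t = np.exp(-sigma * delta)         # per-segment transmittance
    if omega is not None:
        # Assumption: the Concealing Field attenuates light transport by
        # scaling transmittance, darkening contributions from farther samples.
        seg_t = seg_t * omega
    # accumulated transmittance in front of each sample (exclusive cumprod)
    trans = np.concatenate(([1.0], np.cumprod(seg_t)[:-1]))
    weights = trans * alpha                # per-sample compositing weights
    return (weights[:, None] * rgb).sum(axis=0)
```

Under this reading, training fits sigma, rgb, and omega jointly against the dark input views, and rendering with omega=None (i.e. eliminating the Concealing Field) produces the well-lit view without any normal-light supervision.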
Related papers
- PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar [25.332440946211236]
3D reconstruction from a single-view is challenging because of the ambiguity from monocular cues and lack of information about occluded regions.
We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations.
We demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo.
arXiv Detail & Related papers (2023-12-21T18:59:53Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field [20.986012773294714]
We present an alternative to the physics-based volume rendering approach by introducing a self-attention-based framework on volumes along a ray.
Our method, which we call ABLE-NeRF, significantly reduces 'blurry' glossy surfaces in rendering and produces realistic translucent surfaces that are lacking in prior art.
arXiv Detail & Related papers (2023-03-24T05:34:39Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Ray Priors through Reprojection: Improving Neural Radiance Fields for Novel View Extrapolation [35.47411859184933]
We study the novel view extrapolation setting in which (1) the training images describe an object well, and (2) there is a notable discrepancy between the distributions of training and test viewpoints.
We propose a random ray casting policy that allows training on unseen views using seen views.
A ray atlas pre-computed from the observed rays' viewing directions could further enhance the rendering quality for extrapolated views.
arXiv Detail & Related papers (2022-05-12T07:21:17Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)