Lighting up NeRF via Unsupervised Decomposition and Enhancement
- URL: http://arxiv.org/abs/2307.10664v1
- Date: Thu, 20 Jul 2023 07:46:34 GMT
- Title: Lighting up NeRF via Unsupervised Decomposition and Enhancement
- Authors: Haoyuan Wang, Xiaogang Xu, Ke Xu, Rynson W.H. Lau
- Abstract summary: We propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images.
Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8-bits/channel) images from a low-light scene.
- Score: 40.89359754872889
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural Radiance Field (NeRF) is a promising approach for synthesizing novel
views, given a set of images and the corresponding camera poses of a scene.
However, images photographed from a low-light scene can hardly be used to train
a NeRF model to produce high-quality results, due to their low pixel
intensities, heavy noise, and color distortion. Combining existing low-light
image enhancement methods with NeRF methods also does not work well due to the
view inconsistency caused by the individual 2D enhancement process. In this
paper, we propose a novel approach, called Low-Light NeRF (or LLNeRF), to
enhance the scene representation and synthesize normal-light novel views
directly from sRGB low-light images in an unsupervised manner. The core of our
approach is a decomposition of radiance field learning, which allows us to
enhance the illumination, reduce noise and correct the distorted colors jointly
with the NeRF optimization process. Our method is able to produce novel view
images with proper lighting and vivid colors and details, given a collection of
camera-finished low dynamic range (8-bits/channel) images from a low-light
scene. Experiments demonstrate that our method outperforms existing low-light
enhancement methods and NeRF methods.
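As an illustration of the decomposition idea described in the abstract, the sketch below shows one plausible form (our assumption, not the authors' released code): per-sample radiance is split into a view-independent color-related term and a scalar lighting term, only the lighting term is brightened and gamma-corrected, and the result is composited with standard NeRF alpha weights. All names and the fixed gain/gamma values are hypothetical.

```python
import numpy as np

def volume_render_decomposed(sigmas, reflectance, lighting, deltas,
                             gain=5.0, gamma=0.6):
    """Hypothetical sketch of rendering with a decomposed radiance field.

    sigmas:      (N,) per-sample densities along one ray
    reflectance: (N, 3) view-independent, color-related component
    lighting:    (N, 1) scalar illumination-related component
    deltas:      (N,) distances between adjacent samples
    gain, gamma: illustrative enhancement parameters (not from the paper)
    """
    # Standard NeRF compositing weights: w_i = T_i * (1 - exp(-sigma_i * delta_i))
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas

    # Enhance only the illumination term: brighten, clip, and gamma-correct,
    # leaving the color-related component (hence hue) untouched.
    enhanced = np.clip(gain * lighting, 0.0, 1.0) ** gamma

    colors = reflectance * enhanced                 # per-sample enhanced radiance
    return (weights[:, None] * colors).sum(axis=0)  # composited RGB
```

Per the abstract, the actual method learns the enhancement jointly with the NeRF optimization process rather than applying fixed parameters as in this sketch.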
Related papers
- LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes [38.59630957057759]
We propose a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.
LuSh-NeRF includes a Scene-Noise Decomposition module for decoupling the noise from the scene representation.
Experiments show that LuSh-NeRF outperforms existing approaches.
arXiv Detail & Related papers (2024-11-11T07:22:31Z)
- NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
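A rough sketch of that idea, under our own assumptions rather than the paper's actual interfaces: mirror the view direction about the surface normal, march the reflected ray through the field, and alpha-composite per-sample feature vectors instead of querying outgoing radiance directly. `query_field` is a hypothetical callable.

```python
import numpy as np

def reflect(d, n):
    """Mirror view direction d about unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def trace_reflection_features(x, d, n, query_field, t_vals):
    """Hypothetical sketch of casting a reflection ray through the field.

    query_field is assumed to map (M, 3) points to (densities, features);
    the composited feature vector would then be decoded into radiance.
    """
    r = reflect(d, n)
    pts = x[None, :] + t_vals[:, None] * r[None, :]   # samples on the reflected ray
    sigmas, feats = query_field(pts)                  # (M,), (M, F)
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * feats).sum(axis=0)     # composited feature vector
```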
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Camera Relocalization in Shadow-free Neural Radiance Fields [16.359064848532483]
Camera relocalization is a crucial problem in computer vision and robotics.
Recent advancements in neural radiance fields (NeRFs) have shown promise in synthesizing photo-realistic images.
We propose a two-stage pipeline that normalizes images with varying lighting and shadow conditions to improve camera relocalization.
arXiv Detail & Related papers (2024-05-23T17:41:15Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies the rendering process to light emission from 3D locations along the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
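Read literally, a concealing field is a multiplicative attenuation applied during compositing; the sketch below is our guess at that form, not the paper's exact equations.

```python
import numpy as np

def render_with_concealing(sigmas, colors, conceal, deltas):
    """Hypothetical sketch: per-sample concealing values in (0, 1]
    attenuate light transport while training on low-light views;
    dropping them at render time (conceal = 1) yields brighter output.

    sigmas:  (N,) densities          colors: (N, 3) per-sample RGB
    conceal: (N,) concealing values  deltas: (N,) sample spacings
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # The concealing factor darkens each sample's contribution.
    weights = trans * alphas * conceal
    return (weights[:, None] * colors).sum(axis=0)
```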
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present bundle adjusted deblur Neural Radiance Fields (BAD-NeRF), a novel framework that is robust to severely motion-blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion blurred image, and jointly learns the parameters of NeRF.
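That image-formation model can be sketched as follows; `render_fn` is a hypothetical renderer, and naive linear interpolation stands in for the paper's pose interpolation on SE(3).

```python
import numpy as np

def synthesize_blurred_pixel(render_fn, pose_start, pose_end, n_virtual=7):
    """Hypothetical sketch of motion-blur image formation: the blurred
    pixel is the average of sharp renderings at virtual camera poses
    spread across the exposure time. In BAD-NeRF the trajectory is
    optimized jointly with the NeRF; here the poses are plain inputs."""
    ts = np.linspace(0.0, 1.0, n_virtual)
    renders = [render_fn((1.0 - t) * pose_start + t * pose_end) for t in ts]
    return np.mean(renders, axis=0)   # compare to the captured blurry pixel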
arXiv Detail & Related papers (2022-11-23T10:53:37Z)
- NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images [37.917974033687464]
NeRF is a technique for high quality novel view synthesis from a collection of posed input images.
We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range.
We show that NeRF is highly robust to the zero-mean distribution of raw noise.
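One way such robustness can be exploited is to penalize errors in linear raw space with a weighting that emphasizes dark regions; the sketch below is our approximation of such a loss, not necessarily the paper's exact formula.

```python
import numpy as np

def raw_space_loss(rendered, raw_target, eps=1e-3):
    """Sketch of a relative error on linear raw values: dividing by the
    rendered intensity up-weights dark pixels, so the model must average
    out the zero-mean raw noise instead of ignoring dark regions."""
    scale = rendered + eps   # in a real trainer this denominator would be
                             # treated as a constant (stop-gradient)
    return np.mean(((rendered - raw_target) / scale) ** 2)
```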
arXiv Detail & Related papers (2021-11-26T18:59:47Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
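The parametrization issue and its fix can be sketched briefly: separate a foreground unit sphere from the background and re-parametrize background points by direction plus inverse distance, so coordinates stay bounded. A minimal illustration under that reading:

```python
import numpy as np

def inverted_sphere_param(x):
    """Sketch of an inverted-sphere parametrization for unbounded scenes:
    points inside the unit sphere keep their coordinates; points outside
    map to (unit direction, 1/r), so arbitrarily distant content lands
    in a bounded coordinate range."""
    r = np.linalg.norm(x)
    if r <= 1.0:
        return x, None           # foreground: handled by the inner model
    return x / r, 1.0 / r        # background: direction plus inverse depth
```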
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
- Low-light Image Restoration with Short- and Long-exposure Raw Pairs [14.643663950015334]
We propose a new low-light image restoration method by using the complementary information of short- and long-exposure images.
We first propose a novel data generation method to synthesize realistic short- and long-exposure raw images.
Then, we design a new long-short-exposure fusion network (LSFNet) to deal with the problems of low-light image fusion.
arXiv Detail & Related papers (2020-07-01T03:22:26Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We propose a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)