Bright-NeRF: Brightening Neural Radiance Field with Color Restoration from Low-light Raw Images
- URL: http://arxiv.org/abs/2412.14547v1
- Date: Thu, 19 Dec 2024 05:55:18 GMT
- Title: Bright-NeRF: Brightening Neural Radiance Field with Color Restoration from Low-light Raw Images
- Authors: Min Wang, Xin Huang, Guoqing Zhou, Qifeng Guo, Qing Wang
- Abstract summary: We propose a novel approach, Bright-NeRF, which learns enhanced and high-quality radiance fields from low-light raw images in an unsupervised manner.
Our method simultaneously achieves color restoration, denoising, and enhanced novel view synthesis.
- Score: 8.679462472714942
- Abstract: Neural Radiance Fields (NeRFs) have demonstrated prominent performance in novel view synthesis. However, their input heavily relies on image acquisition under normal light conditions, making it challenging to learn accurate scene representation in low-light environments where images typically exhibit significant noise and severe color distortion. To address these challenges, we propose a novel approach, Bright-NeRF, which learns enhanced and high-quality radiance fields from multi-view low-light raw images in an unsupervised manner. Our method simultaneously achieves color restoration, denoising, and enhanced novel view synthesis. Specifically, we leverage a physically-inspired model of the sensor's response to illumination and introduce a chromatic adaptation loss to constrain the learning of response, enabling consistent color perception of objects regardless of lighting conditions. We further utilize the raw data's properties to expose the scene's intensity automatically. Additionally, we have collected a multi-view low-light raw image dataset to advance research in this field. Experimental results demonstrate that our proposed method significantly outperforms existing 2D and 3D approaches. Our code and dataset will be made publicly available.
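The abstract does not specify the form of the chromatic adaptation loss. As an illustrative stand-in only (not the paper's actual formulation), a gray-world style penalty, a classic color-constancy assumption, captures the same idea: push the per-channel means of a rendered image toward a common gray value so that object colors stay consistent regardless of the illuminant.

```python
# Hedged sketch of a gray-world color-constancy penalty.
# This is NOT Bright-NeRF's actual loss, only a common stand-in
# for a chromatic adaptation constraint.
def gray_world_loss(pixels):
    """pixels: list of (r, g, b) floats in [0, 1].

    Returns the squared deviation of each channel mean from the
    global gray level; zero for a perfectly neutral image.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return sum((m - gray) ** 2 for m in means)
```

For a neutral image the loss vanishes; a strongly tinted image (e.g., all-red) is penalized, nudging the learned sensor response toward consistent color perception.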
Related papers
- NieR: Normal-Based Lighting Scene Rendering [17.421326290704844]
NieR (Normal-Based Lighting Scene Rendering) is a novel framework that takes into account the nuances of light reflection on diverse material surfaces.
We present the LD (Light Decomposition) module, which captures the lighting reflection characteristics on surfaces.
We also propose the HNGD (Hierarchical Normal Gradient Densification) module to overcome the limitations of sparse Gaussian representation.
arXiv Detail & Related papers (2024-05-21T14:24:43Z)
- Leveraging Thermal Modality to Enhance Reconstruction in Low-Light Conditions [25.14690752484963]
Neural Radiance Fields (NeRF) accomplishes photo-realistic novel view synthesis by learning the implicit representation of a scene from multi-view images.
Existing approaches reconstruct low-light scenes from raw images but struggle to recover texture and boundary details in dark regions.
We present Thermal-NeRF, which takes thermal and visible raw images as inputs, to accomplish visible and thermal view synthesis simultaneously.
arXiv Detail & Related papers (2024-03-21T00:35:31Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
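The attenuation described above follows standard NeRF volume rendering, where accumulated transmittance along a ray decays with density; the "Concealing Field" assigns extra density to the air itself. A minimal sketch of the transmittance term (an assumption based on the summary, not Aleth-NeRF's exact code):

```python
import math

def transmittance(sigmas, deltas):
    """Accumulated transmittance along a ray: T = exp(-sum(sigma_i * delta_i)).

    sigmas: per-segment densities (the concealing field would add
            nonzero density even in empty air).
    deltas: per-segment lengths along the ray.
    """
    return math.exp(-sum(s * d for s, d in zip(sigmas, deltas)))
```

With zero density the ray passes unattenuated (T = 1); adding air density darkens whatever lies behind it, which is how a concealing field models low-light emission during rendering.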
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- Lighting up NeRF via Unsupervised Decomposition and Enhancement [40.89359754872889]
We propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images.
Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8 bits per channel) images from a low-light scene.
arXiv Detail & Related papers (2023-07-20T07:46:34Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Prior works mainly focus on low-light images captured in the visible spectrum, trained with pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Low-light Image Enhancement via Breaking Down the Darkness [8.707025631892202]
This paper presents a novel framework inspired by the divide-and-rule principle.
We propose to convert an image from the RGB space into a luminance-chrominance one.
An adjustable noise suppression network is designed to eliminate noise in the brightened luminance.
The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors.
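The exact color space used for the luminance-chrominance split is not stated in this summary; a common choice, shown here purely as an illustrative stand-in, is the BT.601 YCbCr transform, which separates brightness (to be denoised and brightened) from color (to be mapped separately):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601-style luminance-chrominance split (an assumed stand-in,
    not necessarily the paper's exact color space)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b     # red-difference chroma
    return y, cb, cr
```

A neutral gray maps to zero chroma, so enhancement applied to `y` alone leaves hue untouched, which is exactly the decoupling the divide-and-rule framework above relies on.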
arXiv Detail & Related papers (2021-11-30T16:50:59Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
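For reference, PSNR figures like the 0.95 dB gain above are computed from the mean squared error against a ground-truth image; a minimal sketch:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).

    mse: mean squared error between prediction and reference.
    max_val: peak signal value (1.0 for normalized images).
    """
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Since the scale is logarithmic, a ~1 dB improvement corresponds to roughly a 20% reduction in mean squared error.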
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
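Several of the works above build on the Retinex model, which assumes an observed intensity is the product of reflectance and illumination, I = R * L. A minimal log-domain sketch (illustrative only, not any single paper's implementation):

```python
import math

def retinex_decompose(intensity, illumination_estimate):
    """Recover reflectance under the Retinex model I = R * L,
    i.e. log R = log I - log L. Inputs are clamped away from
    zero for numerical safety."""
    log_r = (math.log(max(intensity, 1e-6))
             - math.log(max(illumination_estimate, 1e-6)))
    return math.exp(log_r)
```

Dividing out an estimated illumination recovers a reflectance map that is independent of lighting, which is why Retinex-based methods can brighten dark inputs without shifting object color.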
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.