LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes
- URL: http://arxiv.org/abs/2411.06757v1
- Date: Mon, 11 Nov 2024 07:22:31 GMT
- Title: LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes
- Authors: Zefan Qu, Ke Xu, Gerhard Petrus Hancke, Rynson W. H. Lau
- Abstract summary: We propose a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.
LuSh-NeRF includes a Scene-Noise Decomposition module for decoupling the noise from the scene representation.
Experiments show that LuSh-NeRF outperforms existing approaches.
- Score: 38.59630957057759
- Abstract: Neural Radiance Fields (NeRFs) have shown remarkable performance in producing novel-view images from high-quality scene images. However, hand-held low-light photography challenges NeRFs as the captured images may simultaneously suffer from low visibility, noise, and camera shake. While existing NeRF methods may handle either low light or motion, directly combining them or incorporating additional image-based enhancement methods does not work, as these degradation factors are highly coupled. We observe that noise in low-light images is always sharp regardless of camera shake, which implies an implicit order of these degradation factors within the image formation process. To this end, we propose in this paper a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images. The key idea of LuSh-NeRF is to sequentially model noise and blur in the images via multi-view feature consistency and frequency information of NeRF, respectively. Specifically, LuSh-NeRF includes a novel Scene-Noise Decomposition (SND) module for decoupling the noise from the scene representation and a novel Camera Trajectory Prediction (CTP) module for the estimation of camera motions based on low-frequency scene information. To facilitate training and evaluations, we construct a new dataset containing both synthetic and real images. Experiments show that LuSh-NeRF outperforms existing approaches. Our code and dataset can be found here: https://github.com/quzefan/LuSh-NeRF.
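The ordering argument in the abstract (sensor noise stays sharp even when the capture is blurry, so noise must be added after shake-induced blur in the forward model) can be illustrated with a toy simulation. This is a minimal sketch, not the paper's method: the function names, the box blur kernel, and the Gaussian read-noise model are illustrative assumptions.

```python
import numpy as np

def box_blur(img, ksize=5):
    # Separable 1-D box filter along both axes, a crude stand-in
    # for camera-shake blur.
    kernel = np.ones(ksize) / ksize
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def simulate_low_light_capture(clean, exposure=0.1, sigma_read=0.02, seed=0):
    """Toy forward model implied by the abstract:
    darken -> blur (camera shake) -> add noise.

    Because noise enters AFTER the blur, it remains sharp in the
    observation, which is what motivates removing noise first (SND)
    and blur second (CTP) in LuSh-NeRF.
    """
    rng = np.random.default_rng(seed)
    dark = clean * exposure                  # low visibility
    blurred = box_blur(dark)                 # camera shake
    noisy = blurred + rng.normal(0.0, sigma_read, clean.shape)  # sharp noise, added last
    return np.clip(noisy, 0.0, 1.0)

img = np.full((32, 32), 0.8)
obs = simulate_low_light_capture(img)
```

Swapping the blur and noise steps would produce blurred (low-frequency) noise, contradicting the observation that real low-light noise is sharp; that asymmetry is the clue to the degradation order.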
Related papers
- CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning [23.080474939586654]
We propose a novel camera parameter free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without providing prior information and constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z) - USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields [7.671858441929298]
We propose Unrolling Shutter Bundle Adjusted Neural Radiance Fields (USB-NeRF).
USB-NeRF is able to correct rolling shutter distortions and recover accurate camera motion trajectory simultaneously under the framework of NeRF.
Our algorithm can also be used to recover high-fidelity high frame-rate global shutter video from a sequence of RS images.
arXiv Detail & Related papers (2023-10-04T09:51:58Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - Lighting up NeRF via Unsupervised Decomposition and Enhancement [40.89359754872889]
We propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images.
Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8-bits/channel) images from a low-light scene.
arXiv Detail & Related papers (2023-07-20T07:46:34Z) - From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z) - BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present BAD-NeRF, a novel bundle adjusted deblur Neural Radiance Field.
BAD-NeRF can be robust to severe motion blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion blurred image, and jointly learns the parameters of NeRF.
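The physical image formation model mentioned in this abstract treats a motion-blurred photograph as the average of virtual sharp images captured along the camera's trajectory during exposure. The sketch below illustrates that idea only; the 1-D sine "renderer" and pose parameterization are hypothetical stand-ins, not BAD-NeRF's actual implementation.

```python
import numpy as np

def render_at_pose(pose_t):
    # Hypothetical stand-in for a NeRF render at a given camera pose;
    # here a 1-D "image" whose content shifts with the pose offset.
    x = np.linspace(0.0, 2.0 * np.pi, 64)
    return np.sin(x + pose_t)

def blurred_render(pose_start, pose_end, n_virtual=7):
    """Motion-blurred image as the mean of sharp renders at poses
    interpolated along the exposure-time trajectory (the idea behind
    BAD-NeRF's physical blur model, heavily simplified here)."""
    ts = np.linspace(pose_start, pose_end, n_virtual)
    return np.mean([render_at_pose(t) for t in ts], axis=0)

sharp = render_at_pose(0.0)
blurry = blurred_render(0.0, 0.5)
```

Averaging shifted renders attenuates high-frequency content, so the synthesized blurry image has lower contrast than any single sharp render; jointly optimizing the trajectory and the NeRF lets the model explain the observed blur.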
arXiv Detail & Related papers (2022-11-23T10:53:37Z) - E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z) - NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images [37.917974033687464]
NeRF is a technique for high quality novel view synthesis from a collection of posed input images.
We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range.
We show that NeRF is highly robust to the zero-mean distribution of raw noise.
arXiv Detail & Related papers (2021-11-26T18:59:47Z) - BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z) - iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.