Sharp-NeRF: Grid-based Fast Deblurring Neural Radiance Fields Using
Sharpness Prior
- URL: http://arxiv.org/abs/2401.00825v1
- Date: Mon, 1 Jan 2024 17:48:38 GMT
- Authors: Byeonghyeon Lee, Howoong Lee, Usman Ali, Eunbyung Park
- Abstract summary: Sharp-NeRF is a technique that renders clean and sharp images from the input blurry images within half an hour of training.
We conducted experiments on benchmarks of blurry images, evaluating both full-reference and no-reference metrics.
Our approach renders sharp novel views with vivid colors and fine details, and it trains considerably faster than previous works.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have shown remarkable performance in neural
rendering-based novel view synthesis. However, NeRF suffers from severe visual
quality degradation when the input images have been captured under imperfect
conditions, such as poor illumination, defocus blurring, and lens aberrations.
In particular, defocus blur is common in images captured with ordinary
cameras. Although a few recent studies have proposed rendering sharp,
high-quality images, they still face key challenges: these methods employ a
Multi-Layer Perceptron (MLP) based NeRF, which requires tremendous
computational time. To overcome
these shortcomings, this paper proposes a novel technique Sharp-NeRF -- a
grid-based NeRF that renders clean and sharp images from the input blurry
images within half an hour of training. To do so, we use several grid-based
kernels to accurately model the sharpness/blurriness of the scene. The
sharpness level of each pixel is computed to learn the spatially varying blur
kernels. We conduct experiments on benchmarks consisting of blurry images,
evaluating both full-reference and no-reference metrics. The
qualitative and quantitative results have revealed that our approach renders
the sharp novel views with vivid colors and fine details, and it has
considerably faster training time than the previous works. Our project page is
available at https://benhenryl.github.io/SharpNeRF/
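
The core idea the abstract describes (a per-pixel sharpness prior used to learn spatially varying blur kernels that are applied to the rendered image) can be sketched in a few lines. The following is a minimal NumPy illustration, assuming a Laplacian-variance sharpness measure and per-pixel normalised kernels; the paper's actual grid-based kernel parameterisation and training procedure are not reproduced here, and both function names are hypothetical.

```python
import numpy as np

def sharpness_map(gray, patch=7):
    """Per-pixel sharpness prior: local variance of the Laplacian.

    Illustrative stand-in; the paper's exact sharpness measure may differ.
    `gray` is a (H, W) grayscale image in [0, 1].
    """
    # Discrete 4-neighbour Laplacian (edges wrap, acceptable for a sketch).
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    # Local variance over patch x patch windows.
    r = patch // 2
    pad = np.pad(lap, r, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(pad, (patch, patch))
    return win.var(axis=(-2, -1))  # (H, W) sharpness scores

def apply_spatially_varying_blur(img, kernels):
    """Blur a rendered image with a different kernel at every pixel.

    `img` is (H, W); `kernels` is (H, W, k, k), each kernel summing to 1.
    During training, the blurred output would be compared against the
    blurry input photo, so the underlying radiance field stays sharp.
    """
    k = kernels.shape[-1]
    r = k // 2
    pad = np.pad(img, r, mode="reflect")
    # (H, W, k, k) view of all k x k neighbourhoods.
    patches = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return (patches * kernels).sum(axis=(-2, -1))
```

In this reading, pixels with a high sharpness score would be steered toward near-delta kernels, while low-score (defocused) pixels receive wider kernels; a delta kernel at every pixel leaves the image unchanged.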
Related papers
- LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes [38.59630957057759]
We propose a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.
LuSh-NeRF includes a Scene-Noise Decomposition module for decoupling the noise from the scene representation.
Experiments show that LuSh-NeRF outperforms existing approaches.
arXiv Detail & Related papers (2024-11-11T07:22:31Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF while preserving the rendering speed with a single network forwarding per pixel as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
- RobustNeRF: Ignoring Distractors with Robust Losses [32.2329459013342]
We advocate a form of robust estimation for NeRF training, modeling distractors in training data as outliers of an optimization problem.
Our method successfully removes outliers from a scene and improves upon our baselines, on synthetic and real-world scenes.
arXiv Detail & Related papers (2023-02-02T02:45:23Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF)
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present a novel bundle adjusted deblur Neural Radiance Fields (BAD-NeRF)
BAD-NeRF can be robust to severe motion blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion blurred image, and jointly learns the parameters of NeRF.
arXiv Detail & Related papers (2022-11-23T10:53:37Z)
- E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z)
- View Synthesis with Sculpted Neural Points [64.40344086212279]
Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
arXiv Detail & Related papers (2022-05-12T03:54:35Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation over NeRF in novel view.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.