CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
- URL: http://arxiv.org/abs/2403.14412v1
- Date: Thu, 21 Mar 2024 13:59:00 GMT
- Title: CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
- Authors: Matteo Bonotto, Luigi Sarrocco, Daniele Evangelista, Marco Imperoli, Alberto Pretto
- Abstract summary: Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available.
We propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each.
We show that CombiNeRF outperforms state-of-the-art methods in few-shot settings on several publicly available datasets.
- Score: 1.374796982212312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available. In few-shot settings, i.e. with a small set of input views, training can overfit to those views, leading to artifacts and geometric and chromatic inconsistencies in the resulting renderings. Regularization is a valid solution that helps NeRF generalization. On the other hand, each of the most recent NeRF regularization techniques aims to mitigate a specific rendering problem. Starting from this observation, in this paper we propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each. In particular, we regularize single and neighboring ray distributions and add a smoothness term to regularize nearby geometries. After these geometric approaches, we propose to exploit Lipschitz regularization on both the NeRF density and color networks and to use encoding masks for input feature regularization. We show that CombiNeRF outperforms state-of-the-art methods in few-shot settings on several publicly available datasets. We also present an ablation study on the LLFF and NeRF-Synthetic datasets that supports the choices made. We release the open-source implementation of our framework with this paper.
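As a rough illustration of the Lipschitz regularization mentioned in the abstract, the sketch below follows a common per-layer scheme: each layer's weight rows are rescaled against a learnable bound softplus(c), and the product of the per-layer bounds is added to the loss. The function names, the row-wise 1-norm choice, and the penalty form are illustrative assumptions, not CombiNeRF's actual implementation.

```python
import numpy as np

def softplus(x):
    """Smooth, always-positive reparameterization of a Lipschitz bound."""
    return np.log1p(np.exp(x))

def lipschitz_normalize(W, c):
    """Rescale each row of W so its absolute row sum is at most
    softplus(c), bounding the layer's Lipschitz constant."""
    row_norms = np.abs(W).sum(axis=1)              # per-row 1-norms
    scale = np.minimum(1.0, softplus(c) / row_norms)
    return W * scale[:, None]

def lipschitz_penalty(cs):
    """Product of per-layer bounds, added to the training loss to
    encourage a small end-to-end Lipschitz constant."""
    return float(np.prod([softplus(c) for c in cs]))
```

In practice a normalization like this would be applied to every linear layer of the density and color MLPs at each forward pass, with the penalty term added to the rendering loss.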
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of the positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
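The frequency regularization referenced above can be sketched as a mask over positional-encoding frequency bands that is revealed gradually during training, in the style popularized by FreeNeRF. The linear schedule, function names, and the `t_end` parameter below are illustrative assumptions, not the actual AR-NeRF implementation.

```python
import numpy as np

def frequency_mask(num_freqs, t, t_end):
    """Linearly growing mask over frequency bands: at training step t,
    roughly the first num_freqs * t / t_end bands are active, so the
    highest frequencies are revealed last."""
    alpha = num_freqs * min(t / t_end, 1.0)
    idx = np.arange(num_freqs)
    return np.clip(alpha - idx, 0.0, 1.0)

def masked_positional_encoding(x, num_freqs, t, t_end):
    """Sin/cos encoding of a scalar input with the mask applied per band."""
    mask = frequency_mask(num_freqs, t, t_end)
    freqs = 2.0 ** np.arange(num_freqs)
    feats = np.concatenate([np.sin(freqs * x), np.cos(freqs * x)])
    return feats * np.concatenate([mask, mask])
```

Masking out high-frequency bands early keeps the network from overfitting high-frequency detail to the few available views before the coarse geometry has stabilized.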
arXiv Detail & Related papers (2024-10-23T13:05:26Z) - TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks [57.73997367306271]
TrackNeRF sets a new benchmark in noisy and sparse view reconstruction.
TrackNeRF shows significant improvements over the state-of-the-art BARF and SPARF.
arXiv Detail & Related papers (2024-08-20T11:14:23Z) - SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome this limitation by integrating implicit geometry regularization.
The study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z) - Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z) - Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields from a very small number of input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z) - PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs [3.818285175392197]
Neural Radiance Fields (NeRF) have promising applications for synthesizing novel views of complex scenes.
NeRF requires dense input views, typically numbering in the hundreds, for generating high-quality images.
We propose pseudo-view augmentation for NeRF, a scheme that expands the limited input into a sufficient amount of training data by considering the geometry of the few-shot inputs.
arXiv Detail & Related papers (2022-11-23T08:01:10Z) - SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
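InfoNeRF's ray entropy minimization can be sketched as below: `alphas` stands for the per-sample opacities along a single ray, and minimizing the entropy of their normalized distribution concentrates density on few samples. The normalization and the omission of InfoNeRF's low-opacity ray masking are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def ray_entropy(alphas, eps=1e-10):
    """Shannon entropy of the normalized opacity distribution along a
    ray; minimizing it discourages density smeared across the ray."""
    p = alphas / (alphas.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())
```

A ray whose density concentrates at one sample has near-zero entropy, while a ray with density spread uniformly over n samples has entropy log(n); the regularizer pushes rays toward the former.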
This list is automatically generated from the titles and abstracts of the papers on this site.