CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
- URL: http://arxiv.org/abs/2403.14412v1
- Date: Thu, 21 Mar 2024 13:59:00 GMT
- Title: CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
- Authors: Matteo Bonotto, Luigi Sarrocco, Daniele Evangelista, Marco Imperoli, Alberto Pretto
- Abstract summary: Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available.
We propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each.
We show that CombiNeRF outperforms state-of-the-art methods in few-shot settings on several publicly available datasets.
- Score: 1.374796982212312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available. In few-shot settings, i.e. with a small set of input views, training can overfit those views, leading to artifacts and to geometric and chromatic inconsistencies in the resulting renderings. Regularization is a valid solution that helps NeRF generalization. On the other hand, each of the most recent NeRF regularization techniques aims to mitigate a specific rendering problem. Starting from this observation, in this paper we propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each. In particular, we regularize single and neighboring ray distributions and add a smoothness term to regularize near geometries. After these geometric approaches, we propose to apply Lipschitz regularization to both the NeRF density and color networks and to use encoding masks for input feature regularization. We show that CombiNeRF outperforms state-of-the-art methods in few-shot settings on several publicly available datasets. We also present an ablation study on the LLFF and NeRF-Synthetic datasets that supports the choices made. We release the open-source implementation of our framework with this paper.
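The abstract does not detail how the Lipschitz regularization is applied to the density and color networks. A minimal sketch, following the common weight-normalization formulation of Lipschitz-bounded MLPs (not necessarily CombiNeRF's exact implementation; all names below are illustrative): each layer's weight matrix is rescaled so that its infinity-norm stays below a learnable per-layer bound, and the product of the per-layer bounds is added to the training loss.

```python
import numpy as np

def softplus(x):
    """Smooth positive reparameterization of the per-layer Lipschitz bound."""
    return np.log1p(np.exp(x))

def lipschitz_normalize(W, c):
    """Rescale W so its inf-norm (max absolute row sum) is at most softplus(c)."""
    norm = np.abs(W).sum(axis=1).max()          # ||W||_inf
    scale = min(1.0, softplus(c) / norm)
    return W * scale

# Hypothetical two-layer network: c1, c2 are trainable scalars, and the
# product of the bounds softplus(c_i) is added to the loss so the
# optimizer is pushed toward smoother (lower-Lipschitz) networks.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 3))   # hidden layer
W2 = rng.normal(size=(3, 64))   # output layer
c1, c2 = 1.0, 1.0
W1_hat = lipschitz_normalize(W1, c1)
W2_hat = lipschitz_normalize(W2, c2)
lip_loss = softplus(c1) * softplus(c2)  # regularization term added to the loss
```

The forward pass then uses the normalized matrices `W1_hat`, `W2_hat`, so the network's overall Lipschitz constant is bounded by the product that `lip_loss` penalizes.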
Related papers
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome this limitation by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
- ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field [89.54363625953044]
Collaborative Neural Radiance Fields (ColNeRF) is designed to work with sparse input.
ColNeRF is capable of capturing richer and more generalized scene representation.
Our approach exhibits superiority in fine-tuning towards adapting to new scenes.
arXiv Detail & Related papers (2023-12-14T16:26:46Z)
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Re-Nerfing is a simple and general multi-stage data augmentation approach.
We train a NeRF with the available views, then use the optimized NeRF to synthesize pseudo-views around the original ones.
We also train a second NeRF with both the original images and the pseudo-views, masking out uncertain regions.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrade the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
- PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs [3.818285175392197]
Neural radiance fields (NeRF) have promising applications for synthesizing novel views of complex scenes.
NeRF requires dense input views, typically numbering in the hundreds, for generating high-quality images.
We propose pseudo-view augmentation for NeRF, a scheme that generates a sufficient amount of data by considering the geometry of few-shot inputs.
arXiv Detail & Related papers (2022-11-23T08:01:10Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and to leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
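The ray entropy idea behind InfoNeRF (and the single-ray distribution regularization mentioned in the CombiNeRF abstract) can be illustrated with a toy sketch, assuming a Shannon entropy over the normalized per-sample opacities along one ray; the function name and the near-empty-ray cutoff are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def ray_entropy(sigmas, deltas):
    """Shannon entropy of the normalized opacity distribution along one ray.

    sigmas: per-sample volume densities; deltas: per-sample interval lengths.
    A low entropy means the density is concentrated at one surface hit,
    which is the behavior the regularizer encourages on sparse views.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)   # per-sample opacity
    mass = alphas.sum()
    if mass < 1e-10:                          # skip rays that hit nothing
        return 0.0
    p = np.clip(alphas / mass, 1e-10, 1.0)    # normalized distribution
    return float(-(p * np.log(p)).sum())

# A ray whose density is concentrated at one sample has low entropy;
# uniform density along the ray has high entropy (penalized in training).
peaked  = ray_entropy(np.array([0.0, 0.0, 50.0, 0.0]), np.full(4, 0.1))
uniform = ray_entropy(np.full(4, 1.0), np.full(4, 0.1))
```

In training, a term like the mean of `ray_entropy` over sampled (including unseen) rays would be added to the rendering loss, pushing each ray toward a single sharp surface.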