ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining
- URL: http://arxiv.org/abs/2312.09249v1
- Date: Thu, 14 Dec 2023 18:59:32 GMT
- Title: ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining
- Authors: Ruoxi Shi, Xinyue Wei, Cheng Wang, Hao Su
- Abstract summary: Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views.
We propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation.
Unlike traditional methods, ZeroRF parametrizes feature grids with a neural network generator, enabling efficient sparse view 360° reconstruction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ZeroRF, a novel per-scene optimization method addressing the
challenge of sparse view 360° reconstruction in neural field
representations. Current breakthroughs like Neural Radiance Fields (NeRF) have
demonstrated high-fidelity image synthesis but struggle with sparse input
views. Existing methods, such as Generalizable NeRFs and per-scene optimization
approaches, face limitations in data dependency, computational cost, and
generalization across diverse scenarios. To overcome these challenges, we
propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into
a factorized NeRF representation. Unlike traditional methods, ZeroRF
parametrizes feature grids with a neural network generator, enabling efficient
sparse view 360° reconstruction without any pretraining or additional
regularization. Extensive experiments showcase ZeroRF's versatility and
superiority in terms of both quality and speed, achieving state-of-the-art
results on benchmark datasets. ZeroRF's significance extends to applications in
3D content generation and editing. Project page:
https://sarahweiii.github.io/zerorf/
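The key idea above, replacing a directly optimized feature grid with the output of a generator network applied to fixed noise, can be sketched as follows. This is a hypothetical minimal illustration of the Deep-Image-Prior-style parametrization, not ZeroRF's actual architecture; the layer sizes, upsampling scheme, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(code, weights):
    """Map a fixed low-res noise code (C, H, W) to a feature grid via
    per-pixel linear maps (1x1 'convolutions') and 2x nearest upsampling.
    Illustrative only; the real method uses a tailored deep generator."""
    x = code
    for W in weights:
        x = np.einsum('oc,chw->ohw', W, x)         # mix channels per pixel
        x = np.maximum(x, 0.0)                     # ReLU
        x = x.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbor upsample
    return x

code = rng.standard_normal((8, 8, 8))              # fixed input noise, never trained
weights = [rng.standard_normal((16, 8)) * 0.1,
           rng.standard_normal((16, 16)) * 0.1]    # only these are optimized

feature_plane = generator(code, weights)
print(feature_plane.shape)  # (16, 32, 32): one plane of a factorized representation
```

In a per-scene optimization loop, gradients flow through the generator into `weights`, so the network's inductive bias regularizes the grid, which is how a Deep Image Prior avoids the need for pretraining or explicit regularizers.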
Related papers
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- Multi-Space Neural Radiance Fields
Existing Neural Radiance Fields (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields
Training neural radiance fields (NeRFs) on sparse input views leads to overfitting and incorrect scene depth estimation.
We reformulate the NeRF to directly output the visibility of a 3D point from a given viewpoint, which reduces training time when applying the visibility constraint.
Our model outperforms the competing sparse input NeRF models including those that use learned priors.
arXiv Detail & Related papers (2023-04-28T18:26:23Z)
- Non-uniform Sampling Strategies for NeRF on 360° Images
This study proposes two novel techniques that effectively build NeRF for 360° omnidirectional images.
We propose two non-uniform ray sampling schemes for NeRF to suit 360° images.
We show that our proposed method enhances the quality of real-world scenes in 360° images.
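The summary above does not spell out the paper's sampling schemes, but one standard motivation for non-uniform sampling on equirectangular 360° images can be sketched: rows near the poles cover far less solid angle than rows near the equator, so sampling rows in proportion to sin(latitude) avoids wasting rays. This is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def sample_rows(height, n_samples, rng):
    """Sample image rows of an equirectangular panorama with probability
    proportional to sin(latitude), i.e., to the solid angle each row covers."""
    lat = (np.arange(height) + 0.5) / height * np.pi  # latitude in (0, pi)
    w = np.sin(lat)                                   # per-row solid-angle weight
    return rng.choice(height, size=n_samples, p=w / w.sum())

rng = np.random.default_rng(0)
rows = sample_rows(512, 100_000, rng)
# Equator rows (~index 256) are drawn far more often than pole rows (~0 or 511).
```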
arXiv Detail & Related papers (2022-12-07T13:48:16Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
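The ray entropy idea named in the title can be sketched in a few lines: penalizing the entropy of the per-ray density (weight) distribution pushes volume-rendering mass to concentrate at a few surface samples rather than smearing it along the ray. The formulation below is a hedged sketch in the spirit of the method, not its exact loss.

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of the normalized per-sample weights along each ray.
    weights: (n_rays, n_samples), nonnegative volume-rendering weights."""
    p = weights / (weights.sum(axis=1, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=1)

peaked = np.array([[0.0, 0.0, 1.0, 0.0]])       # mass on one sample: low entropy
diffuse = np.array([[0.25, 0.25, 0.25, 0.25]])  # mass spread out: high entropy
assert ray_entropy(peaked)[0] < ray_entropy(diffuse)[0]
```

Adding the mean of `ray_entropy` over sampled rays to the photometric loss acts as the regularizer: with few input views it discourages the floaters and semi-transparent artifacts that sparse-view NeRFs otherwise produce.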
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
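The partitioning step described above can be illustrated with a toy spatial assignment: each pixel (here reduced to a ground-plane position) is routed to the nearest of a fixed grid of cell centroids, and each cell's data trains its own submodule in parallel. This is a simplified stand-in for Mega-NeRF's actual clustering, with all sizes and names chosen for illustration.

```python
import numpy as np

def assign_to_cells(points, centroids):
    """points: (N, 2) ground-plane positions; centroids: (K, 2).
    Returns, for each point, the index of its nearest centroid."""
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(1000, 2))        # toy scene footprint
gx, gy = np.meshgrid([25.0, 75.0], [25.0, 75.0])
centroids = np.stack([gx.ravel(), gy.ravel()], axis=1)  # 2x2 grid of cells
labels = assign_to_cells(points, centroids)
# labels[i] names the NeRF submodule responsible for point i.
```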
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- Recursive-NeRF: An Efficient and Dynamically Growing NeRF
Recursive-NeRF is an efficient rendering and training approach for the Neural Radiance Field (NeRF) method.
Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level.
Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality.
arXiv Detail & Related papers (2021-05-19T12:51:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.