UNeRF: Time and Memory Conscious U-Shaped Network for Training Neural
Radiance Fields
- URL: http://arxiv.org/abs/2206.11952v1
- Date: Thu, 23 Jun 2022 19:57:07 GMT
- Title: UNeRF: Time and Memory Conscious U-Shaped Network for Training Neural
Radiance Fields
- Authors: Abiramy Kuganesan, Shih-yang Su, James J. Little, Helge Rhodin
- Abstract summary: Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view synthesis and scene reconstruction.
However, the increased resolution and model-free nature of such neural fields come at the cost of high training times and excessive memory requirements.
We propose a method to exploit the redundancy of NeRF's sample-based computations by partially sharing evaluations across neighboring sample points.
- Score: 16.826691448973367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view
synthesis and scene reconstruction, with applications ranging from large static
scenes to dynamic human motion. However, the increased resolution and
model-free nature of such neural fields come at the cost of high training times
and excessive memory requirements. Recent advances improve the inference time
by using complementary data structures yet these methods are ill-suited for
dynamic scenes and often increase memory consumption. Little has been done to
reduce the resources required at training time. We propose a method to exploit
the redundancy of NeRF's sample-based computations by partially sharing
evaluations across neighboring sample points. Our UNeRF architecture is
inspired by the UNet, where spatial resolution is reduced in the middle of the
network and information is shared between adjacent samples. Although this
change violates the strict and conscious separation of view-dependent
appearance and view-independent density estimation in the NeRF method, we show
that it improves novel view synthesis. We also introduce an alternative
subsampling strategy which shares computation while minimizing any violation of
view invariance. UNeRF is a plug-in module for the original NeRF network. Our
major contributions include reduction of the memory footprint, improved
accuracy, and reduced amortized processing time both during training and
inference. With only weak assumptions on locality, we achieve improved resource
utilization on a variety of neural radiance field tasks. We demonstrate
applications to the novel view synthesis of static scenes as well as dynamic
human shape and motion.
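To make the core idea concrete, below is a minimal, hypothetical sketch of sharing computation across neighboring ray samples with a small U-shaped 1D network: per-sample features are pooled to half resolution, processed once per pair of neighbors, and upsampled back with a skip connection. This is an illustration of the general technique, not the authors' implementation; all layer choices and sizes are assumptions.

```python
import torch
import torch.nn as nn

class URayBlock(nn.Module):
    """U-shaped block over the samples of a ray (illustrative sketch)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encode = nn.Conv1d(feat_dim, feat_dim, 3, padding=1)
        self.down = nn.AvgPool1d(2)  # halve the per-ray sample resolution
        self.bottleneck = nn.Conv1d(feat_dim, feat_dim, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="linear", align_corners=False)
        self.decode = nn.Conv1d(2 * feat_dim, feat_dim, 3, padding=1)

    def forward(self, x):  # x: (n_rays, feat_dim, n_samples)
        skip = torch.relu(self.encode(x))                  # mix adjacent samples
        z = torch.relu(self.bottleneck(self.down(skip)))   # shared work at half resolution
        z = self.up(z)                                     # back to one feature per sample
        return torch.relu(self.decode(torch.cat([z, skip], dim=1)))

feats = torch.randn(1024, 64, 64)  # e.g. encoded inputs for 64 samples on 1024 rays
out = URayBlock()(feats)           # same shape; downstream heads predict density/color
```

Because neighboring samples along a ray tend to see similar scene content, evaluating the bottleneck at half the sample resolution amortizes computation across neighbors, which is the locality assumption the abstract refers to.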
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of the positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z) - Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding and introduce the concept of gears to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z) - VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations [25.88881764546414]
VQ-NeRF is an efficient pipeline for enhancing implicit neural representations via vector quantization.
We present an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales.
We incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions.
arXiv Detail & Related papers (2023-10-23T01:41:38Z) - Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three orders of magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z) - ActiveNeRF: Learning where to See with Uncertainty Estimation [36.209200774203005]
Recently, Neural Radiance Fields (NeRF) have shown promising performance in reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
arXiv Detail & Related papers (2022-09-18T12:09:15Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks (a minimal sketch of the ray-entropy idea appears after this list).
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - Towards Lightweight Controllable Audio Synthesis with Conditional
Implicit Neural Representations [10.484851004093919]
Implicit neural representations (INRs) are neural networks used to approximate low-dimensional functions.
In this work we shed light on the potential of Conditional Implicit Neural Representations (CINRs) as lightweight backbones in generative frameworks for audio synthesis.
arXiv Detail & Related papers (2021-11-14T13:36:18Z) - NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
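As referenced in the InfoNeRF entry above, here is a minimal sketch of a ray-entropy regularizer in that spirit: per-sample opacities along each ray are normalized into a distribution whose Shannon entropy is penalized, discouraging density smeared along the ray when viewpoints are sparse. This is an assumed formulation for illustration, not the paper's exact loss.

```python
import torch

def ray_entropy_loss(alphas, eps=1e-10):
    # alphas: (n_rays, n_samples), non-negative per-sample opacities
    p = alphas / (alphas.sum(dim=-1, keepdim=True) + eps)  # per-ray distribution
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)        # Shannon entropy per ray
    return entropy.mean()

alphas = torch.rand(1024, 64)    # dummy opacities for 1024 rays, 64 samples each
loss = ray_entropy_loss(alphas)  # add, with a weight, to the photometric loss
```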
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.