DWTNeRF: Boosting Few-shot Neural Radiance Fields via Discrete Wavelet Transform
- URL: http://arxiv.org/abs/2501.12637v2
- Date: Sun, 02 Feb 2025 08:42:39 GMT
- Title: DWTNeRF: Boosting Few-shot Neural Radiance Fields via Discrete Wavelet Transform
- Authors: Hung Nguyen, Blark Runfa Li, Truong Nguyen
- Abstract summary: We present DWTNeRF, a unified framework based on Instant-NGP's fast-training hash encoding.
It is coupled with regularization terms designed for few-shot NeRF, which operates on sparse training views.
Our approach encourages a re-thinking of current few-shot approaches for fast-converging implicit representations like INGP or 3DGS.
- Score: 3.44306950522716
- Abstract: Neural Radiance Fields (NeRF) has achieved superior performance in novel view synthesis and 3D scene representation, but its practical applications are hindered by slow convergence and reliance on dense training views. To this end, we present DWTNeRF, a unified framework based on Instant-NGP's fast-training hash encoding. It is coupled with regularization terms designed for few-shot NeRF, which operates on sparse training views. Our DWTNeRF additionally includes a novel Discrete Wavelet loss that allows explicit prioritization of low frequencies directly in the training objective, reducing few-shot NeRF's overfitting on high frequencies in earlier training stages. We also introduce a model-based approach, based on multi-head attention, that is compatible with INGP-based models, which are sensitive to architectural changes. On the 3-shot LLFF benchmark, DWTNeRF outperforms Vanilla INGP by 15.07% in PSNR, 24.45% in SSIM and 36.30% in LPIPS. Our approach encourages a re-thinking of current few-shot approaches for fast-converging implicit representations like INGP or 3DGS.
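The abstract describes the Discrete Wavelet loss only at a high level. As a minimal sketch, assuming a single-level 2D Haar transform implemented directly in PyTorch and a fixed down-weighting of the detail subbands (the function names, weights and wavelet choice are illustrative assumptions, not the paper's implementation), the idea of prioritizing low frequencies in the training objective could look like:

```python
import torch
import torch.nn.functional as F

def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar DWT of an image batch shaped (B, C, H, W).
    Assumes H and W are even. Returns the low-frequency approximation (LL)
    and three high-frequency detail subbands."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # detail subband
    hl = (a - b + c - d) / 2  # detail subband
    hh = (a - b - c + d) / 2  # detail subband
    return ll, lh, hl, hh

def dwt_loss(pred: torch.Tensor, gt: torch.Tensor,
             low_weight: float = 1.0, high_weight: float = 0.1) -> torch.Tensor:
    """Wavelet-domain reconstruction loss that weights the low-frequency
    subband more heavily than the detail subbands (weights are placeholders)."""
    ll_p, lh_p, hl_p, hh_p = haar_dwt2(pred)
    ll_g, lh_g, hl_g, hh_g = haar_dwt2(gt)
    low = F.mse_loss(ll_p, ll_g)
    high = (F.mse_loss(lh_p, lh_g)
            + F.mse_loss(hl_p, hl_g)
            + F.mse_loss(hh_p, hh_g))
    return low_weight * low + high_weight * high
```

In a few-shot setting a loss of this kind would be added to the usual photometric loss on rendered patches; the detail-subband weight could also be annealed upward over training so that high frequencies are only emphasized once the coarse structure has converged.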
Related papers
- Rate-aware Compression for NeRF-based Volumetric Video [21.372568857027748]
radiance fields (NeRF) have advanced the development of 3D volumetric video technology.
Existing solutions compress NeRF representations after the training stage, leading to a separation between representation training and compression.
In this paper, we try to directly learn a compact NeRF representation for volumetric video in the training stage based on the proposed rate-aware compression framework.
arXiv Detail & Related papers (2024-11-08T04:29:14Z) - Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Field (NeRF).
Recent works demonstrate that frequency regularization of Positional Encoding (PE) can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z) - Spatial Annealing for Efficient Few-shot Neural Rendering [73.49548565633123]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z) - Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent [61.56387277538849]
This paper explores promptable NeRF generation for direct conditioning and fast generation of NeRF parameters for the underlying 3D scenes.
Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single forward pass.
We will show that our approach speeds up the text-to-NeRF model DreamFusion and the 3D reconstruction speed of the image-to-NeRF method Zero-1-to-3 by 3 to 5 times.
arXiv Detail & Related papers (2023-12-05T08:32:46Z) - Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF while preserving the rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z) - FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency
Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
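The frequency regularization itself is only named here. As a rough sketch in the spirit of FreeNeRF, assuming a linearly growing visibility mask over positional-encoding frequency bands (the function name, schedule and boundary handling are illustrative assumptions, not the paper's exact method), it could be implemented as:

```python
import torch

def frequency_mask(num_bands: int, step: int, reg_steps: int) -> torch.Tensor:
    """Mask over positional-encoding frequency bands that keeps low bands open
    and gradually reveals higher ones as training progresses."""
    if step >= reg_steps:            # regularization finished: all bands visible
        return torch.ones(num_bands)
    mask = torch.zeros(num_bands)
    visible = num_bands * step / reg_steps   # fractional number of open bands
    full = int(visible)
    mask[:full] = 1.0
    if full < num_bands:
        mask[full] = visible - full  # partially open the next band
    return mask

# Usage sketch: multiply each band's sin/cos features by mask[i] before
# feeding the encoded position into the NeRF MLP.
```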
arXiv Detail & Related papers (2023-03-13T18:59:03Z) - Compressing Explicit Voxel Grid Representations: fast NeRFs become also
small [3.1473798197405944]
Re:NeRF aims to reduce memory storage of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z) - Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level
Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z) - Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual
Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)