Stable Surface Regularization for Fast Few-Shot NeRF
- URL: http://arxiv.org/abs/2403.19985v1
- Date: Fri, 29 Mar 2024 05:39:47 GMT
- Title: Stable Surface Regularization for Fast Few-Shot NeRF
- Authors: Byeongin Joung, Byeong-Uk Lee, Jaesung Choe, Ukcheol Shin, Minjun Kang, Taeyeop Lee, In So Kweon, Kuk-Jin Yoon
- Abstract summary: We develop a stable surface regularization technique called Annealing Signed Distance Function (ASDF)
We observe that the Eikonal loss requires a dense training signal to shape the different level sets of the SDF, leading to low-fidelity results under few-shot training.
The proposed approach is up to 45 times faster than existing few-shot novel view synthesis methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an algorithm for synthesizing novel views under a few-shot setup. The main idea is a stable surface regularization technique called Annealing Signed Distance Function (ASDF), which anneals the surface in a coarse-to-fine manner to accelerate convergence. We observe that the Eikonal loss - a widely used geometric regularization - requires a dense training signal to shape the different level sets of the SDF, leading to low-fidelity results under few-shot training. In contrast, the proposed surface regularization successfully reconstructs scenes and produces high-fidelity geometry with stable training. Our method is further accelerated by utilizing a grid representation and monocular geometric priors. Finally, the proposed approach is up to 45 times faster than existing few-shot novel view synthesis methods, and it produces comparable results on the ScanNet and NeRF-Real datasets.
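The abstract contrasts the standard Eikonal loss with an annealed, coarse-to-fine surface loss. The exact ASDF formulation is defined in the paper; the numpy sketch below is only illustrative, and the `band_start`/`band_end` values and the linear shrinking schedule are assumptions, not the paper's settings.

```python
import numpy as np

def eikonal_loss(grads):
    """Classic Eikonal regularization: encourage |grad f(x)| = 1 everywhere.

    grads: (N, 3) array of SDF gradients at sampled points.
    """
    norms = np.linalg.norm(grads, axis=-1)
    return float(np.mean((norms - 1.0) ** 2))

def annealed_surface_loss(sdf_pred, sdf_prior, step, total_steps,
                          band_start=0.5, band_end=0.01):
    """Illustrative coarse-to-fine (annealed) surface regularization.

    Supervises predicted SDF values against a geometric prior, but only
    inside a truncation band that shrinks linearly over training, so the
    surface is shaped coarsely first and refined later. This is a sketch
    of the annealing idea, not the paper's ASDF loss.
    """
    t = min(step / total_steps, 1.0)
    band = band_start + t * (band_end - band_start)  # shrinking supervision band
    mask = np.abs(sdf_prior) < band                  # supervise near the surface only
    if not mask.any():
        return 0.0
    return float(np.mean((sdf_pred[mask] - sdf_prior[mask]) ** 2))
```

Under this sketch, gradients with unit norm incur zero Eikonal penalty, and points whose prior SDF falls outside the current band are simply ignored by the annealed loss.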
Related papers
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
arXiv Detail & Related papers (2024-07-17T08:16:25Z) - RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Considering that different methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
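The RaNeuS blurb above replaces the globally constant Eikonal weight with a ray-wise weighting factor. A minimal numpy sketch of that idea follows; how RaNeuS actually computes the per-ray weights is defined in that paper, so here they are simply an input.

```python
import numpy as np

def raywise_weighted_eikonal(grads, ray_weights):
    """Sketch of a ray-adaptive Eikonal term.

    grads: (R, S, 3) SDF gradients for S samples along each of R rays.
    ray_weights: (R,) per-ray factors replacing a single global weight,
    so well-observed rays can be regularized differently from others.
    """
    residual = (np.linalg.norm(grads, axis=-1) - 1.0) ** 2  # (R, S)
    per_ray = residual.mean(axis=-1)                        # (R,)
    return float(np.mean(ray_weights * per_ray))
```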
arXiv Detail & Related papers (2024-06-14T07:54:25Z) - Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z) - MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance [29.267039546199094]
In this paper, we aim to create 3D models that provide accurate geometry and view synthesis.
We propose a patch-based approach that effectively leverages monocular surface normal and relative depth predictions.
Experiments show 4x the performance of RegNeRF and 8x that of FreeNeRF on average F1@2cm on the ETH3D MVS benchmark.
arXiv Detail & Related papers (2024-04-12T05:43:10Z) - SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome this limitation by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z) - NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense SLAM [5.709880146357355]
We introduce NeuV-SLAM, a novel simultaneous localization and mapping pipeline based on neural multiresolution voxels.
NeuV-SLAM is characterized by ultra-fast convergence and incremental expansion capabilities.
arXiv Detail & Related papers (2024-02-03T04:26:35Z) - Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
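The "globally sparse and locally dense" data structure above can be sketched as a hash map of dense voxel blocks: only blocks near observed surfaces are allocated, while each allocated block stores a contiguous dense array for cache-friendly queries. The block size and the +1 free-space initialization below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

BLOCK = 8  # voxels per block edge; an assumption for illustration

class SparseBlockGrid:
    """Minimal sketch of a globally sparse, locally dense SDF grid."""

    def __init__(self):
        # block coordinate (tuple) -> dense (BLOCK, BLOCK, BLOCK) SDF array
        self.blocks = {}

    def allocate(self, bx, by, bz):
        # Allocate a dense block on demand, initialized to +1 (free space).
        self.blocks.setdefault((bx, by, bz), np.ones((BLOCK,) * 3))

    def set_sdf(self, x, y, z, value):
        b = (x // BLOCK, y // BLOCK, z // BLOCK)
        self.allocate(*b)
        self.blocks[b][x % BLOCK, y % BLOCK, z % BLOCK] = value

    def query(self, x, y, z):
        # Unallocated space returns None (globally sparse); hits inside an
        # allocated block are plain dense-array lookups (locally dense).
        b = (x // BLOCK, y // BLOCK, z // BLOCK)
        block = self.blocks.get(b)
        return None if block is None else float(block[x % BLOCK, y % BLOCK, z % BLOCK])
```

For example, writing one SDF value allocates only the single block containing it; queries into any other block return None without touching dense storage.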
arXiv Detail & Related papers (2023-05-22T16:50:19Z) - GraphReg: Dynamical Point Cloud Registration with Geometry-aware Graph Signal Processing [0.0]
This study presents a high-accuracy, efficient, and physically induced method for 3D point cloud registration.
We explore geometry-aware rigid-body dynamics to regulate the particle (point) motion, which results in more precise and robust registration.
Results demonstrate that our proposed method outperforms state-of-the-art approaches in terms of accuracy and is more suitable for registering large-scale point clouds.
arXiv Detail & Related papers (2023-02-02T14:06:46Z) - Few-shot Non-line-of-sight Imaging with Signal-surface Collaborative Regularization [18.466941045530408]
Non-line-of-sight imaging aims to reconstruct targets from multiply reflected light.
We propose a signal-surface collaborative regularization framework that provides noise-robust reconstructions with a minimal number of measurements.
Our approach has great potential in real-time non-line-of-sight imaging applications such as rescue operations and autonomous driving.
arXiv Detail & Related papers (2022-11-21T11:19:20Z) - Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction [42.3230709881297]
We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images.
Our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU.
arXiv Detail & Related papers (2021-11-22T14:02:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences.