HollowNeRF: Pruning Hashgrid-Based NeRFs with Trainable Collision
Mitigation
- URL: http://arxiv.org/abs/2308.10122v1
- Date: Sat, 19 Aug 2023 22:28:17 GMT
- Title: HollowNeRF: Pruning Hashgrid-Based NeRFs with Trainable Collision
Mitigation
- Authors: Xiufeng Xie, Riccardo Gherardi, Zhihong Pan, Stephen Huang
- Abstract summary: We propose a novel compression solution for hashgrid-based neural radiance fields (NeRF).
HollowNeRF automatically sparsifies the feature grid during the training phase.
Our method delivers comparable rendering quality to Instant-NGP, while utilizing just 31% of the parameters.
- Score: 6.335245465042035
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural radiance fields (NeRF) have garnered significant attention, with
recent works such as Instant-NGP accelerating NeRF training and evaluation
through a combination of hashgrid-based positional encoding and neural
networks. However, effectively leveraging the spatial sparsity of 3D scenes
remains a challenge. To cull away unnecessary regions of the feature grid,
existing solutions rely on prior knowledge of object shape or periodically
estimate object shape during training by repeated model evaluations, which are
costly and wasteful.
To address this issue, we propose HollowNeRF, a novel compression solution
for hashgrid-based NeRF which automatically sparsifies the feature grid during
the training phase. Instead of directly compressing dense features, HollowNeRF
trains a coarse 3D saliency mask that guides efficient feature pruning, and
employs an alternating direction method of multipliers (ADMM) pruner to
sparsify the 3D saliency mask during training. By exploiting the sparsity in
the 3D scene to redistribute hash collisions, HollowNeRF improves rendering
quality while using a fraction of the parameters of comparable state-of-the-art
solutions, leading to a better cost-accuracy trade-off. Our method delivers
comparable rendering quality to Instant-NGP, while utilizing just 31% of the
parameters. In addition, our solution can achieve a PSNR accuracy gain of up to
1 dB using only 56% of the parameters.
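To make the collision problem concrete, below is a minimal sketch of an Instant-NGP-style hashgrid lookup at a single resolution level. The table size, feature width, and hash constants are illustrative assumptions rather than values from either paper; the point is that the table has far fewer slots than the grid has corners, so unrelated regions of space share (and compete for) the same features.

```python
import numpy as np

# Minimal sketch of an Instant-NGP-style hash encoding at one resolution level.
# TABLE_SIZE, FEATURE_DIM, and the prime constants are illustrative assumptions.
TABLE_SIZE = 2 ** 14            # far fewer slots than dense grid corners
FEATURE_DIM = 2                 # learned features stored per table slot
PRIMES = np.array([1, 2_654_435_761, 805_459_861], dtype=np.uint64)

# Trainable feature table (initialized small, as is common for hashgrid NeRFs).
hash_table = (np.random.randn(TABLE_SIZE, FEATURE_DIM) * 1e-4).astype(np.float32)

def grid_hash(corner: np.ndarray) -> int:
    """Map an integer 3D grid corner to a table slot.

    Because TABLE_SIZE is much smaller than the number of corners, distinct
    corners collide in the same slot; gradients from occupied and empty space
    then compete for the same features, which is the collision problem
    HollowNeRF mitigates by pruning empty regions.
    """
    c = corner.astype(np.uint64)
    h = (c[0] * PRIMES[0]) ^ (c[1] * PRIMES[1]) ^ (c[2] * PRIMES[2])
    return int(h % np.uint64(TABLE_SIZE))

def encode(point: np.ndarray, resolution: int) -> np.ndarray:
    """Trilinearly interpolate hashed corner features for a point in [0, 1)^3."""
    scaled = point * resolution
    base = np.floor(scaled).astype(np.int64)
    frac = scaled - base
    feature = np.zeros(FEATURE_DIM, dtype=np.float32)
    for offset in np.ndindex(2, 2, 2):                     # the 8 voxel corners
        corner = base + np.array(offset)
        weight = float(np.prod(np.where(np.array(offset) == 1, frac, 1.0 - frac)))
        feature += weight * hash_table[grid_hash(corner)]
    return feature

print(encode(np.array([0.3, 0.7, 0.1]), resolution=64))
```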
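The mechanism the abstract describes, a coarse 3D saliency mask that gates hashgrid features and is sparsified by an ADMM pruner during training, can be sketched roughly as follows. This is a generic ADMM-based sparsification applied to a voxel mask under assumed hyperparameters (grid resolution, step size, penalty weight, keep ratio); it illustrates the idea only and is not the authors' implementation.

```python
import numpy as np

# Hedged sketch: a coarse 3D saliency mask gating hashgrid features, plus
# ADMM-style updates that drive most mask entries to zero during training.
# Grid resolution, step size, penalty weight, and keep ratio are assumptions.
COARSE_RES = 32
mask = np.full((COARSE_RES,) * 3, 0.5, dtype=np.float32)   # saliency per coarse voxel
z = mask.copy()                                            # ADMM auxiliary (sparse) variable
u = np.zeros_like(mask)                                    # scaled dual variable
STEP, RHO, KEEP_RATIO = 1e-2, 1e-3, 0.10

def gated_feature(point: np.ndarray, raw_feature: np.ndarray) -> np.ndarray:
    """Scale a hashgrid feature by the saliency of the coarse voxel containing the point.

    Voxels whose saliency is driven to zero contribute nothing to rendering,
    so their hash-table slots are effectively freed for occupied regions.
    """
    idx = tuple(np.clip((point * COARSE_RES).astype(int), 0, COARSE_RES - 1))
    return mask[idx] * raw_feature

def admm_step(grad_mask: np.ndarray) -> None:
    """One ADMM iteration given the rendering-loss gradient w.r.t. the mask."""
    global mask, z, u
    # 1) Gradient step on the loss plus the augmented-Lagrangian coupling term.
    mask = np.clip(mask - STEP * (grad_mask + RHO * (mask - z + u)), 0.0, 1.0)
    # 2) Project mask + u onto the sparse set: keep only the largest entries.
    v = mask + u
    k = max(1, int(KEEP_RATIO * v.size))
    threshold = np.partition(np.abs(v).ravel(), -k)[-k]
    z = np.where(np.abs(v) >= threshold, v, 0.0).astype(np.float32)
    # 3) Dual update pushes the mask and its sparse projection to agree.
    u += mask - z

# Toy usage: a random "gradient" stands in for backprop through the renderer.
for _ in range(3):
    admm_step(np.random.randn(*mask.shape).astype(np.float32) * 1e-3)
print(gated_feature(np.array([0.3, 0.7, 0.1]), np.ones(2, dtype=np.float32)))
print(f"nonzero fraction of z: {np.count_nonzero(z) / z.size:.3f}")
```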
Related papers
- DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection [42.07920565812081]
We propose a novel post-training weight pruning scheme for 3D object detection.
It determines redundant parameters in the pretrained model that lead to minimal distortion in both locality and confidence.
This framework aims to minimize detection distortion of network output to maximally maintain detection precision.
arXiv Detail & Related papers (2024-07-02T09:33:32Z)
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- Fast Training of Diffusion Transformer with Extreme Masking for 3D Point Clouds Generation [64.99362684909914]
We propose FastDiT-3D, a novel masked diffusion transformer tailored for efficient 3D point cloud generation.
We also propose a novel voxel-aware masking strategy to adaptively aggregate background/foreground information from voxelized point clouds.
Our method achieves state-of-the-art performance with an extreme masking ratio of nearly 99%.
arXiv Detail & Related papers (2023-12-12T12:50:33Z)
- CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features [0.0]
In this paper, we introduce instant learning of compression-aware NeRF features (CAwa-NeRF).
Our proposed instant learning pipeline can achieve impressive results on different kinds of static scenes.
In particular, for single-object masked-background scenes, CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of the original size without any loss in PSNR (33 dB), or down to 2.4% (0.53 MB) with a slight loss (32.31 dB).
arXiv Detail & Related papers (2023-10-23T08:40:44Z)
- VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations [25.88881764546414]
VQ-NeRF is an efficient pipeline for enhancing implicit neural representations via vector quantization.
We present an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales.
We incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions.
arXiv Detail & Related papers (2023-10-23T01:41:38Z)
- MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF can achieve the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z)
- EfficientNeRF: Efficient Neural Radiance Fields [63.76830521051605]
We present EfficientNeRF as an efficient NeRF-based method to represent 3D scenes and synthesize novel-view images.
Our method reduces training time by over 88% and reaches a rendering speed of over 200 FPS, while still achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-02T05:36:44Z)
- Point-NeRF: Point-based Neural Radiance Fields [39.38262052015925]
Point-NeRF uses neural 3D point clouds, with associated neural features, to model a radiance field.
It can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline.
Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers via a novel pruning and growing mechanism.
arXiv Detail & Related papers (2022-01-21T18:59:20Z)