How Far Can We Compress Instant-NGP-Based NeRF?
- URL: http://arxiv.org/abs/2406.04101v1
- Date: Thu, 6 Jun 2024 14:16:03 GMT
- Title: How Far Can We Compress Instant-NGP-Based NeRF?
- Authors: Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
- Abstract summary: We introduce the Context-based NeRF Compression (CNC) framework to provide a storage-friendly NeRF representation.
We exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
We attain 86.7% and 82.3% storage size reduction against the SOTA NeRF compression method BiRF.
- Score: 45.88543996963832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Neural Radiance Field (NeRF) has demonstrated remarkable capabilities in representing 3D scenes. To expedite rendering, learnable explicit representations have been introduced to complement the implicit NeRF representation; however, this results in a large storage requirement. In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we exploit both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we use hash collision and occupancy grids as strong prior knowledge for better context modeling. To the best of our knowledge, we are the first to construct and exploit context models for NeRF compression. We achieve size reductions of 100$\times$ and 70$\times$ with improved fidelity over the baseline Instant-NGP on the Synthetic-NeRF and Tanks and Temples datasets, respectively. Additionally, we attain 86.7\% and 82.3\% storage size reductions against the SOTA NeRF compression method BiRF. Our code is available here: https://github.com/YihangChen-ee/CNC.
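To make the abstract's idea concrete, here is a minimal sketch, not the authors' implementation, of how level-wise and dimension-wise context could drive entropy estimation for quantized hash-grid features. The discretized-Gaussian entropy model, the network sizes, and all names below are illustrative assumptions.

```python
# Hypothetical sketch: context-based entropy estimation for hash-grid features.
import torch
import torch.nn as nn

class LevelContextModel(nn.Module):
    """Predict per-channel mean/scale for level-l features from the coarser
    level-(l-1) features (level-wise context) and the channels of level l
    assumed to be decoded already (dimension-wise context)."""
    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * feat_dim),  # per-channel mean and raw scale
        )

    def forward(self, coarse_feat, target_feat, decoded_mask):
        # decoded_mask: 1.0 for channels already decoded, 0.0 otherwise.
        ctx = torch.cat([coarse_feat, target_feat * decoded_mask], dim=-1)
        mean, raw_scale = self.net(ctx).chunk(2, dim=-1)
        return mean, nn.functional.softplus(raw_scale) + 1e-6

def estimated_bits(x_q, mean, scale):
    """Bit cost of quantized values under a discretized Gaussian: -log2 P(x_q)."""
    normal = torch.distributions.Normal(mean, scale)
    prob = normal.cdf(x_q + 0.5) - normal.cdf(x_q - 0.5)
    return -torch.log2(prob.clamp_min(1e-9)).sum()

if __name__ == "__main__":
    torch.manual_seed(0)
    feat_dim = 2                                  # channels per hash-grid level
    model = LevelContextModel(feat_dim)
    coarse = torch.randn(1024, feat_dim)          # level-(l-1) features at the same positions
    target = torch.round(torch.randn(1024, feat_dim) * 4)  # quantized level-l features
    mask = torch.tensor([1.0, 0.0])               # channel 0 decoded, channel 1 being coded
    mean, scale = model(coarse, target, mask)
    print(f"estimated size: {estimated_bits(target, mean, scale).item() / 8 / 1024:.1f} KiB")
```

The bit estimate here is differentiable, so in a setup like this it could be added to the rendering loss during training to make the features cheap to entropy-code; the paper's occupancy-grid and hash-collision priors would presumably enter as additional conditioning inputs to such a context model.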
Related papers
- Rate-aware Compression for NeRF-based Volumetric Video [21.372568857027748]
Neural radiance fields (NeRF) have advanced the development of 3D volumetric video technology.
Existing solutions compress NeRF representations after the training stage, leading to a separation between representation training and compression.
In this paper, we directly learn a compact NeRF representation for volumetric video during the training stage, based on the proposed rate-aware compression framework.
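As a contrast with post-training compression, a minimal, hypothetical rate-aware training objective might jointly penalize rendering error and an estimated bit cost of the representation; the function name and constants below are illustrative, not the paper's formulation.

```python
# Hypothetical rate-aware objective: distortion + lambda * bits-per-pixel.
import torch
import torch.nn.functional as F

def rate_aware_loss(rendered, target, feature_bits, num_pixels, lam=1e-3):
    """feature_bits is a differentiable estimate of the representation's bit cost."""
    distortion = F.mse_loss(rendered, target)
    rate = feature_bits / num_pixels          # bits per rendered pixel
    return distortion + lam * rate

if __name__ == "__main__":
    rendered = torch.rand(4, 3)               # toy rendered colors
    target = torch.rand(4, 3)                 # ground-truth colors
    bits = torch.tensor(5.2e6)                # stand-in for an entropy estimate
    print(rate_aware_loss(rendered, target, bits, num_pixels=4).item())
```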
arXiv Detail & Related papers (2024-11-08T04:29:14Z)
- Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression [10.469092315640696]
We construct a new dataset, called Explicit-NeRF-QA, to address the challenge of the NeRF compression study.
We use 22 3D objects with diverse geometries, textures, and material complexities to train four typical explicit NeRF models.
A subjective experiment in a lab environment is conducted to collect subjective scores from 21 viewers.
arXiv Detail & Related papers (2024-07-11T04:02:05Z)
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression [55.6351304553003]
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis.
We propose a Hash-grid Assisted Context (HAC) framework for highly compact 3DGS representation.
Our work pioneers context-based compression for the 3DGS representation, resulting in a remarkable size reduction of over $75\times$ compared to vanilla 3DGS.
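Read alongside CNC above, one plausible reading of "hash-grid assisted context" is that a learned hash grid, queried at each Gaussian (or anchor) position, yields context features from which an entropy model predicts the distribution of that primitive's attributes. The sketch below is only a guess at the mechanism implied by the title; the class names, table size, and hashing constants are invented.

```python
# Hedged sketch: hash-grid context features conditioning an attribute entropy model.
import torch
import torch.nn as nn

class HashGridContext(nn.Module):
    """Toy single-level hash grid: hash integer cell coordinates into a feature table."""
    def __init__(self, table_size: int = 2 ** 14, feat_dim: int = 4, cell: float = 0.05):
        super().__init__()
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 0.01)
        self.cell = cell
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                       # xyz: (N, 3) Gaussian/anchor positions
        cells = torch.floor(xyz / self.cell).long()
        h = (cells * self.primes).sum(-1) % self.table.shape[0]
        return self.table[h]                      # (N, feat_dim) context features

class AttributeEntropyModel(nn.Module):
    """Map grid context features to mean/scale of each attribute for entropy coding."""
    def __init__(self, feat_dim: int = 4, attr_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * attr_dim))

    def forward(self, ctx):
        mean, raw_scale = self.mlp(ctx).chunk(2, dim=-1)
        return mean, nn.functional.softplus(raw_scale) + 1e-6

if __name__ == "__main__":
    xyz = torch.rand(100, 3)                      # toy anchor positions in [0, 1)^3
    ctx = HashGridContext()(xyz)
    mean, scale = AttributeEntropyModel()(ctx)
    print(mean.shape, scale.shape)                # torch.Size([100, 8]) twice
```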
arXiv Detail & Related papers (2024-03-21T16:28:58Z)
- "Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected deep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z)
- CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features [0.0]
In this paper, we introduce instant learning of compression-aware NeRF features (CAwa-NeRF).
Our proposed instant learning pipeline can achieve impressive results on different kinds of static scenes.
In particular, for single-object masked-background scenes, CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of their original size without any loss in PSNR (33 dB), or down to 2.4% (0.53 MB) with only a slight loss (32.31 dB).
arXiv Detail & Related papers (2023-10-23T08:40:44Z)
- NAS-NeRF: Generative Neural Architecture Search for Neural Radiance Fields [75.28756910744447]
Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but their high computational complexity limits deployability.
We introduce NAS-NeRF, a generative neural architecture search strategy that generates compact, scene-specialized NeRF architectures.
Our method incorporates constraints on target metrics and budgets to guide the search towards architectures tailored for each scene.
arXiv Detail & Related papers (2023-09-25T17:04:30Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF while preserving the rendering speed, with a single network forward pass per pixel as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- Compressible-composable NeRF via Rank-residual Decomposition [21.92736190195887]
Neural Radiance Field (NeRF) has emerged as a compelling method to represent 3D objects and scenes for photo-realistic rendering.
We present a neural representation that enables efficient and convenient manipulation of models.
Our method achieves rendering quality comparable to state-of-the-art methods while additionally enabling compression and composition.
arXiv Detail & Related papers (2022-05-30T06:18:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.