SPC-NeRF: Spatial Predictive Compression for Voxel Based Radiance Field
- URL: http://arxiv.org/abs/2402.16366v1
- Date: Mon, 26 Feb 2024 07:40:45 GMT
- Title: SPC-NeRF: Spatial Predictive Compression for Voxel Based Radiance Field
- Authors: Zetian Song, Wenhong Duan, Yuhuai Zhang, Shiqi Wang, Siwei Ma, Wen Gao
- Abstract summary: We propose SPC-NeRF, a novel framework applying spatial predictive coding in EVG compression.
Our method can achieve 32% bit saving compared to the state-of-the-art method VQRF.
- Score: 41.33347056627581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representing the Neural Radiance Field (NeRF) with the explicit voxel grid
(EVG) is a promising direction for improving NeRFs. However, the EVG
representation is not efficient for storage and transmission because of its
substantial memory cost. Current methods for compressing EVG mainly inherit
techniques designed for neural network compression, such as pruning and
quantization, which do not take full advantage of the spatial correlation of
voxels. Inspired by mature digital image compression techniques, this paper
proposes SPC-NeRF, a novel framework applying spatial predictive coding in EVG
compression. The proposed framework removes spatial redundancy efficiently
for better compression performance. Moreover, we model the bitrate and design a
novel loss function that jointly optimizes compression ratio and distortion to
achieve higher coding efficiency. Extensive experiments demonstrate that our
method can achieve 32% bit saving compared to the state-of-the-art method VQRF
on multiple representative test datasets, with comparable training time.
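The abstract describes two ideas: predicting each voxel's features from already-coded spatial neighbors so that only prediction residuals need to be stored, and training with a loss that balances an estimated bitrate against rendering distortion. The paper gives no reference code; the sketch below is only an illustration of that general recipe in PyTorch, where the causal neighbor-averaging predictor, the L1 rate proxy, and the `lambda_rate` weight are assumptions made for the example rather than the authors' actual formulation.

```python
# Illustrative sketch (not the paper's implementation): spatial predictive
# coding over an explicit voxel grid (EVG) plus a rate-distortion style loss.
# Assumptions: the predictor averages the three causal axis neighbors, and the
# bitrate is approximated by the L1 magnitude of quantized residuals.
import torch
import torch.nn.functional as F


def predict_from_neighbors(grid: torch.Tensor) -> torch.Tensor:
    """Predict each voxel's features from already-coded neighbors.

    grid: (C, D, H, W) feature grid. The prediction is the mean of the
    causal neighbor along each spatial axis (zero at the boundary).
    """
    pad = F.pad(grid, (1, 0, 1, 0, 1, 0))   # pad front of W, H, D with zeros
    nx = pad[:, 1:, 1:, :-1]                # neighbor at (d, h, w - 1)
    ny = pad[:, 1:, :-1, 1:]                # neighbor at (d, h - 1, w)
    nz = pad[:, :-1, 1:, 1:]                # neighbor at (d - 1, h, w)
    return (nx + ny + nz) / 3.0


def rate_distortion_loss(grid, render_fn, target, lambda_rate=1e-3, q_step=0.05):
    """Joint rate-distortion objective: distortion + lambda_rate * rate."""
    residual = grid - predict_from_neighbors(grid)
    # Straight-through quantization so gradients still reach the grid.
    quantized = residual + (torch.round(residual / q_step) * q_step - residual).detach()
    distortion = F.mse_loss(render_fn(grid), target)
    rate = quantized.abs().mean()           # stand-in for the coded bitrate
    return distortion + lambda_rate * rate


if __name__ == "__main__":
    grid = torch.randn(8, 16, 16, 16, requires_grad=True)   # toy EVG features
    target = torch.rand(3, 32, 32)                           # toy ground-truth view
    render_fn = lambda g: g.mean() + torch.zeros(3, 32, 32)  # stand-in renderer
    loss = rate_distortion_loss(grid, render_fn, target)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

In the method as described, the residuals would be entropy coded and the rate term would come from a modeled bit cost rather than a plain L1 proxy; the example only shows how prediction residuals and a rate weight can enter a jointly optimized loss.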
Related papers
- Rate-aware Compression for NeRF-based Volumetric Video [21.372568857027748]
Neural radiance fields (NeRF) have advanced the development of 3D volumetric video technology.
Existing solutions compress NeRF representations after the training stage, leading to a separation between representation training and compression.
In this paper, we try to directly learn a compact NeRF representation for volumetric video in the training stage based on the proposed rate-aware compression framework.
arXiv Detail & Related papers (2024-11-08T04:29:14Z) - Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z) - Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption [57.056311855630916]
We propose a Controllable Generative Image Compression framework, Control-GIC.
It is capable of fine-grained adaption across a broad spectrum while ensuring high-fidelity and general compression.
We develop a conditional decoder that can trace back to historically encoded multi-granularity representations.
arXiv Detail & Related papers (2024-06-02T14:22:09Z) - Convolutional variational autoencoders for secure lossy image compression in remote sensing [47.75904906342974]
This study investigates image compression based on convolutional variational autoencoders (CVAE).
CVAEs have been demonstrated to outperform conventional compression methods such as JPEG2000 by a substantial margin on compression benchmark datasets.
arXiv Detail & Related papers (2024-04-03T15:17:29Z) - Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z) - Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms trade off quality against size by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Compression-aware Projection with Greedy Dimension Reduction for Convolutional Neural Network Activations [3.6188659868203388]
We propose a compression-aware projection system to improve the trade-off between classification accuracy and compression ratio.
Our test results show that the proposed methods effectively reduce memory access by 2.91x to 5.97x with negligible accuracy drop on MobileNetV2/ResNet18/VGG16.
arXiv Detail & Related papers (2021-10-17T14:02:02Z) - An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Topk, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)