Masked Wavelet Representation for Compact Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.09069v2
- Date: Tue, 21 Mar 2023 10:23:40 GMT
- Title: Masked Wavelet Representation for Compact Neural Radiance Fields
- Authors: Daniel Rho, Byeonghyeon Lee, Seungtae Nam, Joo Chan Lee, Jong Hwan Ko,
Eunbyung Park
- Abstract summary: Using a multi-layer perceptron to represent a 3D scene or object requires enormous computational resources and time.
We present a method to reduce the model size without compromising the advantages of having additional data structures.
With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB.
- Score: 5.279919461008267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) have demonstrated the potential of
coordinate-based neural representation (neural fields or implicit neural
representation) in neural rendering. However, using a multi-layer perceptron
(MLP) to represent a 3D scene or object requires enormous computational
resources and time. There have been recent studies on how to reduce these
computational inefficiencies by using additional data structures, such as grids
or trees. Despite the promising performance, the explicit data structure
necessitates a substantial amount of memory. In this work, we present a method
to reduce the size without compromising the advantages of having additional
data structures. In detail, we propose using the wavelet transform on
grid-based neural fields. Grid-based neural fields provide fast convergence,
while the wavelet transform, whose efficiency has been demonstrated in
high-performance standard codecs, improves the parameter efficiency of the
grids. Furthermore, in order to achieve a higher sparsity of grid coefficients
while maintaining reconstruction quality, we present a novel trainable masking
approach. Experimental results demonstrate that non-spatial grid coefficients,
such as wavelet coefficients, are capable of attaining a higher level of
sparsity than spatial grid coefficients, resulting in a more compact
representation. With our proposed mask and compression pipeline, we achieved
state-of-the-art performance within a memory budget of 2 MB. Our code is
available at https://github.com/daniel03c1/masked_wavelet_nerf.
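The two ingredients of the abstract, a wavelet transform over a feature grid and a hard mask on the resulting coefficients, can be sketched in plain NumPy. This is a minimal single-level 2D Haar transform with a forward-pass-only mask; it is an illustration under stated assumptions, not the paper's implementation (which trains the mask scores end to end through a straight-through estimator and uses the released code at the URL above):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar transform of a grid (height and width even)."""
    # Rows: average (low-pass) and difference (high-pass) of adjacent pairs.
    lo, hi = (x[0::2, :] + x[1::2, :]) / 2.0, (x[0::2, :] - x[1::2, :]) / 2.0
    # Columns: same split, yielding the four subbands LL, LH, HL, HH.
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2.0, (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2.0, (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi = np.empty((hl.shape[0], hl.shape[1] * 2))
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0] * 2, lo.shape[1]))
    x[0::2, :], x[1::2, :] = lo + hi, lo - hi
    return x

def apply_mask(coeffs, scores, threshold=0.0):
    """Hard binary mask on wavelet coefficients.

    The paper trains the scores (gradients pass through the binarization
    via a straight-through estimator); only the forward pass is shown.
    Returns the masked coefficients and the fraction zeroed out.
    """
    mask = (scores > threshold).astype(coeffs.dtype)
    return coeffs * mask, 1.0 - mask.mean()
```

The point of masking in the wavelet domain rather than the spatial domain is that most high-frequency coefficients (lh, hl, hh) can be zeroed with little reconstruction error, which is the higher sparsity the abstract refers to; the surviving values are then what the compression pipeline has to store.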
Related papers
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- SHACIRA: Scalable HAsh-grid Compression for Implicit Neural Representations [46.01969382873856]
Implicit Neural Representations (INR) or neural fields have emerged as a popular framework to encode multimedia signals.
We propose SHACIRA, a framework for compressing such feature grids with no additional post-hoc pruning/quantization stages.
Our approach outperforms existing INR approaches without the need for any large datasets or domain-specifics.
arXiv Detail & Related papers (2023-09-27T17:59:48Z)
- MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
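For context, the hash-grid lookup that Instant-NGP-style methods (including a mixed-feature table like MF-NeRF's) build on can be sketched as follows. This is a generic simplification, assuming integer coordinates, no trilinear interpolation, and illustrative prime constants; it is not MF-NeRF's actual implementation:

```python
import numpy as np

def hash_lookup(coords, table):
    """Map integer 3D coordinates to learned feature vectors via spatial hashing.

    coords: (N, 3) integer array; table: (T, F) feature table.
    Hash collisions are not resolved explicitly; training lets gradients
    average over colliding cells, as in Instant-NGP-style grids.
    """
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    # XOR of per-axis products, wrapping modulo 2**64, then reduce mod T.
    h = np.bitwise_xor.reduce(coords.astype(np.uint64) * primes, axis=-1)
    return table[h % np.uint64(table.shape[0])]
```

The memory saving comes from T being much smaller than the number of grid cells, so the table (not the grid resolution) sets the footprint.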
arXiv Detail & Related papers (2023-04-25T05:44:50Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
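The vector-quantization step such a dictionary method relies on can be sketched minimally as a nearest-codeword lookup; the actual method learns the codebook end to end through the auto-decoder formulation described above, which is omitted here:

```python
import numpy as np

def vq_quantize(features, codebook):
    """Replace each feature vector by its nearest codebook entry.

    features: (N, D); codebook: (K, D). Storage drops from N*D floats
    to N indices of log2(K) bits each, plus the shared codebook, which
    is where the large compression ratios come from.
    """
    # Squared Euclidean distance from every feature to every codeword.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```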
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Efficient bit encoding of neural networks for Fock states [77.34726150561087]
The complexity of the neural network scales only with the number of bit-encoded neurons rather than the maximum boson number.
In the high occupation regime its information compression efficiency is shown to surpass even maximally optimized density matrix implementations.
arXiv Detail & Related papers (2021-03-15T11:24:40Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.