Distilled Low Rank Neural Radiance Field with Quantization for Light
Field Compression
- URL: http://arxiv.org/abs/2208.00164v3
- Date: Thu, 21 Sep 2023 07:28:24 GMT
- Title: Distilled Low Rank Neural Radiance Field with Quantization for Light
Field Compression
- Authors: Jinglei Shi and Christine Guillemot
- Abstract summary: We propose a Quantized Distilled Low-Rank Neural Radiance Field (QDLR-NeRF) representation for the task of light field compression.
Our proposed method learns an implicit scene representation in the form of a Neural Radiance Field (NeRF), which also enables view synthesis.
Experimental results show that our proposed method yields better compression efficiency compared to state-of-the-art methods.
- Score: 33.08737425706558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose in this paper a Quantized Distilled Low-Rank Neural Radiance Field
(QDLR-NeRF) representation for the task of light field compression. While
existing compression methods encode the set of light field sub-aperture images,
our proposed method learns an implicit scene representation in the form of a
Neural Radiance Field (NeRF), which also enables view synthesis. To reduce its
size, the model is first learned under a Low-Rank (LR) constraint using a
Tensor Train (TT) decomposition within an Alternating Direction Method of
Multipliers (ADMM) optimization framework. To further reduce the model's size,
the components of the tensor train decomposition need to be quantized. However,
simultaneously considering the optimization of the NeRF model with both the
low-rank constraint and rate-constrained weight quantization is challenging. To
address this difficulty, we introduce a network distillation operation that
separates the low-rank approximation and the weight quantization during network
training. The information from the initial LR-constrained NeRF (LR-NeRF) is
distilled into a model of much smaller dimension (DLR-NeRF) based on the TT
decomposition of the LR-NeRF. We then learn an optimized global codebook to
quantize all TT components, producing the final QDLR-NeRF. Experimental results
show that our proposed method yields better compression efficiency compared to
state-of-the-art methods, and it additionally has the advantage of allowing the
synthesis of any light field view with high quality.
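To make the two size-reduction steps described in the abstract concrete, here is a minimal NumPy sketch that (i) factorizes a toy weight tensor with a tensor-train (TT) decomposition and (ii) quantizes all TT cores with a single shared codebook learned by k-means. It is only an illustration under assumed shapes, TT ranks, and codebook size; it is not the authors' implementation, and the ADMM optimization, NeRF training, and distillation stages are omitted.

```python
import numpy as np

def tt_decompose(tensor, ranks):
    """TT-SVD sketch: factor a d-way tensor into d cores of shape (r_prev, n_k, r_k)."""
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    unfolding = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(ranks[k], S.size)                        # truncate to the target TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        unfolding = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

def learn_global_codebook(cores, k, iters=25, seed=0):
    """1-D k-means over all core entries -> one codebook shared by every TT core."""
    values = np.concatenate([c.ravel() for c in cores])
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            members = values[assign == j]
            if members.size:
                centers[j] = members.mean()
    return centers

def quantize_cores(cores, centers):
    """Replace every core entry by its nearest codeword (in practice one stores the indices)."""
    quantized = []
    for core in cores:
        idx = np.abs(core.reshape(-1, 1) - centers[None, :]).argmin(axis=1)
        quantized.append(centers[idx].reshape(core.shape))
    return quantized

# Toy stand-in for one NeRF MLP weight matrix (256 x 256), reshaped into a 4-way tensor.
rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256))
cores = tt_decompose(W.reshape(16, 16, 16, 16), ranks=[8, 8, 8])   # hypothetical TT ranks
codebook = learn_global_codebook(cores, k=256)                      # hypothetical codebook size
W_hat = tt_reconstruct(quantize_cores(cores, codebook)).reshape(256, 256)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

In the paper itself, the low-rank structure is imposed during NeRF training through the ADMM framework, the distilled DLR-NeRF directly carries the TT components, and the global codebook is optimized over those components rather than applied post hoc to a fixed matrix as in this sketch.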
Related papers
- Rate-aware Compression for NeRF-based Volumetric Video [21.372568857027748]
Neural radiance fields (NeRF) have advanced the development of 3D volumetric video technology.
Existing solutions compress NeRF representations after the training stage, leading to a separation between representation training and compression.
In this paper, we directly learn a compact NeRF representation for volumetric video during the training stage, based on the proposed rate-aware compression framework.
arXiv Detail & Related papers (2024-11-08T04:29:14Z)
- FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss [5.349799154834945]
This paper introduces Frequency Consistency for Implicit Neural Representation (FreqINR), an arbitrary-scale super-resolution method.
During training, we employ Adaptive Discrete Cosine Transform Frequency Loss (ADFL) to minimize the frequency gap between HR and ground-truth images.
During inference, we extend the receptive field to preserve spectral coherence between low-resolution (LR) and ground-truth images.
arXiv Detail & Related papers (2024-08-25T03:53:17Z)
- 2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution [83.09117439860607]
Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment.
However, low-bit quantization is known to degrade the accuracy of SR models compared to their full-precision (FP) counterparts.
We present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization (a generic low-bit PTQ sketch appears after this list).
arXiv Detail & Related papers (2024-06-10T06:06:11Z)
- Frequency-Aware Re-Parameterization for Over-Fitting Based Image Compression [12.725194101094711]
Over-fitting-based image compression requires compact weights and fast convergence for practical use.
This paper presents a simple re-parameterization method to train CNNs with reduced weight storage and accelerated convergence.
The proposed method is verified with extensive experiments on over-fitting-based image restoration across various datasets, achieving up to a -46.12% BD-rate relative to HEIF with only 200 iterations.
arXiv Detail & Related papers (2023-10-12T06:32:12Z)
- BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance Fields [0.0]
We aim to improve the Inverted Neural Radiance Fields (iNeRF) algorithm, which defines the image pose estimation problem as a NeRF-based iterative linear optimization.
NeRFs are novel neural space representation models that can synthesize photorealistic novel views of real-world scenes or objects.
arXiv Detail & Related papers (2023-10-05T14:27:06Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
Neural Light Fields (NeLF) offer a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Regularization by Denoising Sub-sampled Newton Method for Spectral CT Multi-Material Decomposition [78.37855832568569]
We propose to solve a model-based maximum a posteriori (MAP) problem to reconstruct multi-material images, with application to spectral CT.
In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function.
We show numerical and experimental results for spectral CT materials decomposition.
arXiv Detail & Related papers (2021-03-25T15:20:10Z)
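As referenced from the 2DQuant entry above, here is a minimal, generic low-bit post-training quantization sketch in NumPy. It shows plain symmetric min-max uniform quantization of a trained weight tensor, not 2DQuant's dual-stage procedure (its calibration and distillation stages are omitted); all names and bit-widths are illustrative.

```python
import numpy as np

def quantize_symmetric(weights, num_bits=4):
    """Symmetric uniform post-training quantization of a weight tensor.

    Maps float weights to signed integers in [-(2**(b-1) - 1), 2**(b-1) - 1]
    using a single per-tensor scale derived from the maximum magnitude.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(weights).max() / qmax                  # per-tensor scale (calibration-free)
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

# Toy usage on a random stand-in for an SR model weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_symmetric(w, num_bits=4)
w_hat = dequantize(q, scale)
print("mean abs quantization error:", np.abs(w - w_hat).mean())
```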