Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression
- URL: http://arxiv.org/abs/2407.08165v2
- Date: Thu, 18 Jul 2024 15:52:26 GMT
- Title: Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression
- Authors: Yuke Xing, Qi Yang, Kaifa Yang, Yilin Xu, Zhu Li
- Abstract summary: We construct a new dataset, called Explicit-NeRF-QA, to address the challenges of NeRF compression research.
We use 22 3D objects with diverse geometries, textures, and material complexities to train four typical explicit NeRF models.
A subjective experiment in a lab environment is conducted to collect subjective scores from 21 viewers.
- Score: 10.469092315640696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Neural Radiance Fields (NeRF) have demonstrated significant advantages in representing and synthesizing 3D scenes. Explicit NeRF models facilitate practical NeRF applications with faster rendering speeds, and also attract considerable attention in NeRF compression research due to their huge storage costs. To address the challenges of NeRF compression research, in this paper we construct a new dataset, called Explicit-NeRF-QA. We use 22 3D objects with diverse geometries, textures, and material complexities to train four typical explicit NeRF models across five parameter levels. Lossy compression is introduced during model generation, pivoting on the selection of key parameters such as the hash table size for InstantNGP and the voxel grid resolution for Plenoxels. By rendering NeRF samples to processed video sequences (PVS), a large-scale subjective experiment is conducted in a lab environment to collect subjective scores from 21 viewers. The diversity of content, the accuracy of the mean opinion scores (MOS), and the characteristics of NeRF distortion are comprehensively presented, establishing the heterogeneity of the proposed dataset. State-of-the-art objective metrics are tested on the new dataset. The best Pearson correlation, around 0.85, is obtained by a full-reference objective metric. All tested no-reference metrics report very poor results, with correlations of 0.4 to 0.6, demonstrating the need for further development of more robust no-reference metrics. The dataset, including NeRF samples, source 3D objects, multiview images for NeRF generation, PVSs, and MOS, is made publicly available at the following location: https://github.com/LittlericeChloe/Explicit_NeRF_QA.
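To make the evaluation protocol above concrete, the following minimal Python sketch shows how a full-reference metric's predictions could be correlated with MOS using Pearson (PLCC) and Spearman (SROCC) correlations, as is standard in quality-assessment studies. The arrays and the PSNR example are illustrative placeholders, not the paper's actual data or pipeline.

```python
# Minimal sketch: correlating objective quality predictions with MOS,
# as done when benchmarking metrics on a QA dataset such as Explicit-NeRF-QA.
# All values below are illustrative placeholders, not dataset values.
import numpy as np
from scipy.stats import pearsonr, spearmanr

metric_scores = np.array([32.1, 28.4, 35.7, 30.2, 26.9])  # e.g., per-PVS PSNR
mos = np.array([4.2, 3.1, 4.8, 3.6, 2.7])                 # per-PVS mean opinion scores

plcc, _ = pearsonr(metric_scores, mos)    # linear correlation (PLCC)
srocc, _ = spearmanr(metric_scores, mos)  # rank-order correlation (SROCC)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```

In practice, QA studies often fit a nonlinear (e.g., logistic) mapping from metric scores to MOS before computing PLCC; the raw correlation above is the simplest variant.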
Related papers
- How Far Can We Compress Instant-NGP-Based NeRF? [45.88543996963832]
We introduce the Context-based NeRF Compression (CNC) framework to provide a storage-friendly NeRF representation.
We exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
We attain 86.7% and 82.3% storage size reduction against the SOTA NeRF compression method BiRF.
arXiv Detail & Related papers (2024-06-06T14:16:03Z)
- NeRF-DetS: Enhancing Multi-View 3D Object Detection with Sampling-adaptive Network of Continuous NeRF-based Representation [60.47114985993196]
NeRF-Det unifies the tasks of novel view synthesis and 3D perception.
We introduce a novel 3D perception network structure, NeRF-DetS.
NeRF-DetS outperforms competitive NeRF-Det on the ScanNetV2 dataset.
arXiv Detail & Related papers (2024-04-22T06:59:03Z)
- NeRFmentation: NeRF-based Augmentation for Monocular Depth Estimation [44.22677259411607]
We propose a NeRF-based data augmentation pipeline to introduce synthetic data with more diverse viewing directions into training datasets.
We apply our technique in conjunction with three state-of-the-art MDE architectures on the popular autonomous driving dataset, KITTI.
arXiv Detail & Related papers (2024-01-08T09:50:54Z)
- SANeRF-HQ: Segment Anything for NeRF in High Quality [61.77762568224097]
We introduce the Segment Anything for NeRF in High Quality (SANeRF-HQ) to achieve high-quality 3D segmentation of any target object in a given scene.
We employ the density field and RGB similarity to enhance the accuracy of segmentation boundaries during aggregation.
arXiv Detail & Related papers (2023-12-03T23:09:38Z)
- ScanNeRF: a Scalable Benchmark for Neural Radiance Fields [21.973450071630676]
ScanNeRF is a dataset characterized by several train/val/test splits aimed at benchmarking the performance of modern NeRF methods under different conditions.
We evaluate three cutting-edge NeRF variants on it to highlight their strengths and weaknesses.
The dataset is available on our project page, together with an online benchmark to foster the development of better and better NeRFs.
arXiv Detail & Related papers (2022-11-24T19:00:02Z)
- NeRF-RPN: A general framework for object detection in NeRFs [54.54613914831599]
NeRF-RPN aims to detect all bounding boxes of objects in a scene.
NeRF-RPN is a general framework and can be applied to detect objects without class labels.
arXiv Detail & Related papers (2022-11-21T17:02:01Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
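Since this entry rests on the per-point MLP at the heart of NeRF, here is a heavily simplified sketch of that architecture, assuming a bare MLP that maps a 3D position to density and color; positional encoding and the view-direction input used by full NeRF are omitted, so this is an illustration of the idea rather than NeRF-SR's actual network.

```python
# Minimal sketch of a NeRF-style per-point MLP: 3D position -> (density, RGB).
# Positional encoding and view dependence are omitted for brevity.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, xyz: torch.Tensor):
        out = self.mlp(xyz)
        sigma = torch.relu(out[..., :1])   # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])  # color constrained to [0, 1]
        return sigma, rgb

points = torch.rand(1024, 3)     # sample points along camera rays
sigma, rgb = TinyNeRF()(points)  # per-point density and color
```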
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF); a minimal sketch of this inversion appears after this list.
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
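To make the "inversion" behind iNeRF concrete, the sketch below optimizes a camera pose by gradient descent on a photometric loss through a differentiable renderer. The `render` function is a toy stand-in (a real NeRF would cast rays from the pose and volume-render the MLP outputs), and a faithful implementation would parametrize the pose on SE(3) rather than as a raw 4x4 matrix.

```python
# Minimal sketch of pose estimation by "inverting" a trained radiance field
# (the idea behind iNeRF). `render` is a placeholder for a differentiable
# NeRF renderer; the pose parametrization is deliberately simplified.
import torch

def render(pose: torch.Tensor) -> torch.Tensor:
    """Toy differentiable 'renderer': pose (4x4) -> RGB image (H, W, 3)."""
    h, w = 8, 8
    # A real NeRF would cast rays from `pose` and volume-render MLP outputs.
    return torch.sigmoid(pose[:3, 3]).expand(h, w, 3)

observed = torch.rand(8, 8, 3)            # target image of the object
pose = torch.eye(4).requires_grad_(True)  # initial pose guess

optimizer = torch.optim.Adam([pose], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    loss = torch.mean((render(pose) - observed) ** 2)  # photometric loss
    loss.backward()  # gradients flow through the renderer into the pose
    optimizer.step()
```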