JointRF: End-to-End Joint Optimization for Dynamic Neural Radiance Field Representation and Compression
- URL: http://arxiv.org/abs/2405.14452v2
- Date: Sat, 8 Jun 2024 06:12:05 GMT
- Title: JointRF: End-to-End Joint Optimization for Dynamic Neural Radiance Field Representation and Compression
- Authors: Zihan Zheng, Houqiang Zhong, Qiang Hu, Xiaoyun Zhang, Li Song, Ya Zhang, Yanfeng Wang
- Abstract summary: We propose a novel end-to-end joint optimization scheme of dynamic NeRF representation and compression, called JointRF.
JointRF achieves significantly improved quality and compression efficiency compared with previous methods.
- Score: 39.403294185116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) excels at photo-realistic rendering of static scenes, inspiring numerous efforts toward volumetric video. However, rendering dynamic and long-sequence radiance fields remains challenging due to the large amount of data required to represent volumetric videos. In this paper, we propose a novel end-to-end joint optimization scheme of dynamic NeRF representation and compression, called JointRF, achieving significantly improved quality and compression efficiency over previous methods. Specifically, JointRF employs a compact residual feature grid and a coefficient feature grid to represent the dynamic NeRF. This representation handles large motions without compromising quality while diminishing temporal redundancy. We also introduce a sequential feature compression subnetwork to further reduce spatial-temporal redundancy. Finally, the representation and compression subnetworks are jointly trained end-to-end within JointRF. Extensive experiments demonstrate that JointRF achieves superior compression performance across various datasets.
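The abstract gives no implementation detail, so as a rough illustration only, here is a minimal PyTorch-style sketch of what one end-to-end rate-distortion training step could look like: a differentiable bitrate estimate is added to the rendering loss, so the feature grids are optimized for quality and size in the same backward pass. The `EntropyModel`, its Laplace prior, the `renderer` callable, and the additive-noise quantization proxy are assumptions for illustration, not JointRF's actual architecture.

```python
import math
import torch
import torch.nn as nn

class EntropyModel(nn.Module):
    """Toy per-channel Laplace entropy model. Its bit estimate is
    differentiable, which is what allows compression to be trained
    jointly with the representation (hypothetical design)."""
    def __init__(self, channels):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(channels, 1, 1, 1))

    def bits(self, x):
        # -log2 p(x) under Laplace(0, scale), summed over all elements.
        scale = self.log_scale.exp().clamp(min=1e-6)
        nll_nats = x.abs() / scale + (2.0 * scale).log()
        return nll_nats.sum() / math.log(2.0)

def train_step(feature_grid, renderer, entropy_model, rays, target_rgb,
               optimizer, lambda_rate=1e-4):
    """One joint step: loss = distortion + lambda_rate * estimated bits."""
    optimizer.zero_grad()
    # Additive uniform noise is the standard differentiable stand-in for
    # quantization at training time in learned compression.
    noisy = feature_grid + torch.empty_like(feature_grid).uniform_(-0.5, 0.5)
    rgb = renderer(noisy, rays)                    # volumetric rendering
    distortion = ((rgb - target_rgb) ** 2).mean()  # MSE in RGB space
    rate = entropy_model.bits(noisy)               # estimated bitrate
    (distortion + lambda_rate * rate).backward()
    optimizer.step()
```

The key property is that the rate term is differentiable, so representation and compression receive gradients together rather than being optimized in separate stages.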
Related papers
- Rate-aware Compression for NeRF-based Volumetric Video [21.372568857027748]
Neural radiance fields (NeRF) have advanced the development of 3D volumetric video technology.
Existing solutions compress NeRF representations after the training stage, leading to a separation between representation training and compression.
In this paper, we directly learn a compact NeRF representation for volumetric video during the training stage, based on the proposed rate-aware compression framework.
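For contrast with such post-training compression, a common way to make a representation compression-aware during training is straight-through quantization; the sketch below is a generic illustration of that idea, not the paper's actual framework.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Straight-through rounding: quantize in the forward pass, pass
    gradients through unchanged in the backward pass, so features are
    learned in their quantized (compact) form."""
    @staticmethod
    def forward(ctx, x, step):
        return torch.round(x / step) * step

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # no gradient for the step size

def quantize(features, step=0.01):
    return STEQuantize.apply(features, step)

# During training, render from the quantized features so the NeRF is
# optimized for its compressed form rather than compressed afterwards:
#   rgb = renderer(quantize(feature_grid), rays)
```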
arXiv Detail & Related papers (2024-11-08T04:29:14Z)
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
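The snippet above does not spell out AR-NeRF's adaptive scheme, so as background, here is a minimal sketch of the frequency regularization idea it builds on (in the FreeNeRF style): positional-encoding frequency bands are unmasked gradually over training, so early optimization sees only low frequencies. The function names and the linear annealing schedule are illustrative assumptions.

```python
import torch

def freq_mask(num_freqs, step, total_steps):
    """Per-band mask in [0, 1]: low frequencies open first, higher
    bands ramp in linearly as training progresses."""
    bands = torch.arange(num_freqs, dtype=torch.float32)
    alpha = num_freqs * step / total_steps  # how many bands are "open"
    return (alpha - bands).clamp(0.0, 1.0)

def positional_encoding(x, num_freqs, step, total_steps):
    """Standard sinusoidal PE with the annealing mask applied per band.
    x: (..., dim) coordinates."""
    bands = torch.arange(num_freqs, dtype=torch.float32)
    angles = x[..., None] * (2.0 ** bands)          # (..., dim, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    mask = freq_mask(num_freqs, step, total_steps)
    return enc * torch.cat([mask, mask], dim=-1)    # masked encoding
```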
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome the limitations of few-shot rendering by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
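As a loose illustration of what a sparse geometric consistency term can look like, the sketch below lifts matched keypoints to 3D with rendered depth and penalizes their reprojection error in the paired view. All tensor names, shapes, and the exact loss are assumptions, not SGCNeRF's published module.

```python
import torch

def reprojection_consistency(depth_a, kps_a, kps_b, K, T_ab):
    """depth_a: (N,) rendered depths at keypoints of view A;
    kps_a, kps_b: (N, 2) matched pixel coordinates; K: (3, 3) intrinsics;
    T_ab: (4, 4) relative pose from view A to view B."""
    ones = torch.ones(kps_a.shape[0], 1)
    # Lift view-A keypoints to camera-space 3D points using their depth.
    pix = torch.cat([kps_a, ones], dim=-1)            # (N, 3) homogeneous
    rays = torch.linalg.solve(K, pix.T).T             # K^{-1} @ pix
    pts_a = rays * depth_a[:, None]                   # (N, 3)
    # Transform into view B and project with the shared intrinsics.
    pts_b = (T_ab @ torch.cat([pts_a, ones], dim=-1).T).T[:, :3]
    proj = (K @ pts_b.T).T
    proj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Matched keypoints should land where the geometry says they should.
    return ((proj - kps_b) ** 2).sum(dim=-1).mean()
```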
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
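At a very high level, the cascade described above could be organized as in the following sketch; the `nerf` and `prompt_encoder` modules are hypothetical placeholders, not NeRF-VPT's actual components.

```python
import torch
import torch.nn as nn

class PromptedStage(nn.Module):
    """One rendering stage conditioned on an RGB 'prompt' image coming
    from the previous stage."""
    def __init__(self, nerf, prompt_encoder):
        super().__init__()
        self.nerf = nerf                      # any NeRF-style renderer
        self.prompt_encoder = prompt_encoder  # embeds the prompt image

    def forward(self, rays, prompt_rgb):
        return self.nerf(rays, self.prompt_encoder(prompt_rgb))

def cascaded_render(stages, rays, image_shape):
    # Stage 0 starts from a blank prompt; each later stage reuses the
    # RGB rendered by the stage before it as its visual prompt.
    prompt = torch.zeros(image_shape)
    for stage in stages:
        prompt = stage(rays, prompt)
    return prompt
```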
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- Efficient Dynamic-NeRF Based Volumetric Video Coding with Rate Distortion Optimization [19.90293875755272]
NeRF has remarkable potential in volumetric video compression thanks to its simple representation and powerful 3D modeling capabilities.
ReRF separates modeling from the compression process, resulting in suboptimal compression efficiency.
In this paper, we propose a more compact volumetric video compression method based on dynamic NeRF.
arXiv Detail & Related papers (2024-02-02T13:03:20Z)
- TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video [47.82392246786268]
Temporal Tri-Plane Radiance Fields (TeTriRF) is a novel technology that significantly reduces the storage size for Free-Viewpoint Video (FVV).
TeTriRF introduces a hybrid representation with tri-planes and voxel grids to support scaling up to long-duration sequences and scenes.
We propose a group training scheme tailored to achieving high training efficiency and yielding temporally consistent, low-entropy scene representations.
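For reference, the tri-plane half of such a hybrid representation reduces to projecting each 3D sample onto three axis-aligned feature planes; a generic lookup is sketched below (the voxel-grid component and the group training scheme are omitted, and the plane layout is an assumption).

```python
import torch
import torch.nn.functional as F

def triplane_features(planes, pts):
    """planes: three tensors of shape (1, C, H, W) for the xy, xz and yz
    planes; pts: (N, 3) points normalized to [-1, 1]. Returns (N, C)."""
    coords = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]
    feats = 0.0
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)  # grid_sample expects (1, N, 1, 2)
        sampled = F.grid_sample(plane, grid, mode='bilinear',
                                align_corners=True)   # (1, C, N, 1)
        feats = feats + sampled.view(plane.shape[1], -1).T
    return feats  # summed per-plane features for each point
```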
arXiv Detail & Related papers (2023-12-10T23:00:24Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
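A toy sketch of the residual idea follows: the previous frame's feature volume is warped by a compact motion field, and a small residual volume corrects what motion alone cannot explain. Shapes and the warping scheme are illustrative assumptions, not ReRF's exact formulation.

```python
import torch
import torch.nn.functional as F

def next_frame_features(prev_feat, motion_grid, residual_grid):
    """prev_feat: (1, C, D, H, W) features of frame t-1;
    motion_grid: (1, D, H, W, 3) sampling offsets in normalized coords;
    residual_grid: (1, C, D, H, W) learned per-frame correction."""
    D, H, W = prev_feat.shape[2:]
    # Base identity sampling grid in grid_sample's (x, y, z) convention.
    z, y, x = torch.meshgrid(torch.linspace(-1, 1, D),
                             torch.linspace(-1, 1, H),
                             torch.linspace(-1, 1, W), indexing='ij')
    base = torch.stack([x, y, z], dim=-1).unsqueeze(0)  # (1, D, H, W, 3)
    # Motion-compensated prediction of the current frame's features...
    warped = F.grid_sample(prev_feat, base + motion_grid,
                           mode='bilinear', align_corners=True)
    # ...plus a compact residual that captures the remaining change.
    return warped + residual_grid
```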
arXiv Detail & Related papers (2023-04-10T08:36:00Z)