Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
- URL: http://arxiv.org/abs/2304.04452v2
- Date: Thu, 15 Jun 2023 14:16:57 GMT
- Title: Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
- Authors: Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars,
Lan Xu, Minye Wu
- Abstract summary: We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate, along with a companion ReRF player that supports online streaming of long-duration FVVs of dynamic scenes.
- Score: 69.22032459870242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of the Neural Radiance Fields (NeRFs) for modeling and free-view
rendering static objects has inspired numerous attempts on dynamic scenes.
Current techniques that utilize neural rendering for facilitating free-view
videos (FVVs) are either restricted to offline rendering or capable of
processing only brief sequences with minimal motion. In this paper, we present
a novel technique, Residual Radiance Field or ReRF, as a highly compact neural
representation to achieve real-time FVV rendering on long-duration dynamic
scenes. ReRF explicitly models the residual information between adjacent
timestamps in the spatial-temporal feature space, with a global
coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a
compact motion grid along with a residual feature grid to exploit inter-frame
feature similarities. We show such a strategy can handle large motions without
sacrificing quality. We further present a sequential training scheme to
maintain the smoothness and the sparsity of the motion/residual grids. Based on
ReRF, we design a special FVV codec that achieves a three orders of magnitude
compression rate and provides a companion ReRF player to support online
streaming of long-duration FVVs of dynamic scenes. Extensive experiments
demonstrate the effectiveness of ReRF for compactly representing dynamic
radiance fields, enabling an unprecedented free-viewpoint viewing experience in
speed and quality.
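To make the representation concrete, the following is a minimal PyTorch sketch of the per-frame update the abstract describes: warp the previous feature grid with a compact motion grid, add a sparse residual grid, and decode with a globally shared tiny MLP. The grid shapes, the trilinear warp, and the decoder widths are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of ReRF-style per-frame reconstruction (assumed shapes/ops,
# not the authors' exact implementation).
import torch
import torch.nn.functional as F

def reconstruct_frame(prev_feat, motion_grid, residual_grid):
    """prev_feat:     (1, C, D, H, W) feature grid of frame t-1
    motion_grid:   (1, 3, d, h, w) compact per-voxel motion vectors
    residual_grid: (1, C, D, H, W) sparse residual features of frame t
    """
    _, _, D, H, W = prev_feat.shape
    dev = prev_feat.device
    # Upsample the compact motion grid to the full feature-grid resolution.
    flow = F.interpolate(motion_grid, size=(D, H, W), mode='trilinear',
                         align_corners=True)
    # Build a normalized sampling grid displaced by the motion vectors.
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, D, device=dev),
        torch.linspace(-1, 1, H, device=dev),
        torch.linspace(-1, 1, W, device=dev), indexing='ij')
    base = torch.stack([xs, ys, zs], dim=-1).unsqueeze(0)  # (1, D, H, W, 3)
    offset = flow.permute(0, 2, 3, 4, 1)                   # (1, D, H, W, 3)
    # Warp frame t-1 features, then add the residual to obtain frame t.
    warped = F.grid_sample(prev_feat, base + offset, align_corners=True)
    return warped + residual_grid

class TinyDecoder(torch.nn.Module):
    """Coordinate-conditioned MLP shared by all frames (assumed layer sizes)."""
    def __init__(self, feat_dim=12, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 4))  # RGB + density per sample

    def forward(self, feats, xyz):
        return self.net(torch.cat([feats, xyz], dim=-1))
```

Because the decoder is shared across the whole sequence, only the compact motion and sparse residual grids vary per frame, which is what makes the representation amenable to the heavy compression the codec targets.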
Related papers
- Neural NeRF Compression [19.853882143024]
Recent NeRFs utilize feature grids to improve rendering quality and speed.
These representations introduce significant storage overhead.
This paper presents a novel method for efficiently compressing a grid-based NeRF model.
arXiv Detail & Related papers (2024-06-13T09:12:26Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding, based on semantic gears, to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- JointRF: End-to-End Joint Optimization for Dynamic Neural Radiance Field Representation and Compression [39.403294185116]
We propose a novel end-to-end joint optimization scheme of dynamic NeRF representation and compression, called JointRF.
JointRF achieves significantly improved quality and compression efficiency compared to previous methods.
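The joint optimization can be illustrated with a generic rate-distortion objective, sketched below; the `bits_per_feature` entropy model and the `lam` weight are hypothetical placeholders, not JointRF's actual loss.

```python
# Generic rate-distortion sketch (illustrative; not JointRF's actual loss).
import torch

def joint_rd_loss(rendered, target, feature_grid, bits_per_feature, lam=1e-3):
    distortion = torch.mean((rendered - target) ** 2)  # rendering fidelity
    rate = bits_per_feature(feature_grid).mean()       # hypothetical entropy model
    return distortion + lam * rate                     # quality/size trade-off
```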
arXiv Detail & Related papers (2024-05-23T11:32:46Z)
- TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video [47.82392246786268]
Temporal Tri-Plane Radiance Fields (TeTriRF) is a novel technology that significantly reduces the storage size for Free-Viewpoint Video (FVV).
TeTriRF introduces a hybrid representation with tri-planes and voxel grids to support scaling up to long-duration sequences and scenes.
We propose a group training scheme tailored to achieving high training efficiency and yielding temporally consistent, low-entropy scene representations.
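A hybrid tri-plane plus voxel-grid query could look like the sketch below; the sum-based fusion and the shared channel width are assumptions rather than TeTriRF's exact design.

```python
# Hedged sketch of a hybrid tri-plane + voxel-grid feature query.
import torch
import torch.nn.functional as F

def query_hybrid(xyz, planes, voxel_grid):
    """xyz:        (N, 3) points in [-1, 1]^3
    planes:     dict with 'xy', 'xz', 'yz' tensors, each (1, C, R, R)
    voxel_grid: (1, C, D, H, W) coarse feature volume (same C assumed)
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]

    def sample_plane(plane, u, v):
        g = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
        return F.grid_sample(plane, g, align_corners=True)[0, :, :, 0].T  # (N, C)

    # Project the point onto the three axis-aligned planes and sum.
    feat = (sample_plane(planes['xy'], x, y)
            + sample_plane(planes['xz'], x, z)
            + sample_plane(planes['yz'], y, z))
    # Add a coarse volumetric feature from the voxel grid.
    g3 = xyz.view(1, -1, 1, 1, 3)
    vox = F.grid_sample(voxel_grid, g3, align_corners=True)[0, :, :, 0, 0].T
    return feat + vox  # (N, C) fused feature for a shared decoder
```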
arXiv Detail & Related papers (2023-12-10T23:00:24Z)
- NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance fields (NeVRF) representation.
NeVRF marries neural radiance fields with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on-the-fly and is capable of streaming the dynamic scene.
Our algorithm can achieve an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up compared to the state-of-the-art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework by representing scenes with time-aware voxel features, named as TiNeuVox.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing similar or even better rendering performance than previous dynamic NeRF methods.
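The time-aware voxel idea can be sketched as a voxel lookup preceded by a tiny time-conditioned deformation MLP; the sinusoidal time encoding and layer sizes below are assumptions, not TiNeuVox's exact modules.

```python
# Hedged sketch of time-aware voxel features (assumed encoding and sizes).
import torch
import torch.nn.functional as F

class TimeAwareVoxels(torch.nn.Module):
    def __init__(self, feat_dim=8, grid_res=96, t_freqs=4):
        super().__init__()
        self.grid = torch.nn.Parameter(
            torch.zeros(1, feat_dim, grid_res, grid_res, grid_res))
        self.t_freqs = t_freqs
        self.deform = torch.nn.Sequential(  # time-conditioned displacement
            torch.nn.Linear(3 + 2 * t_freqs, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3))

    def forward(self, xyz, t):
        # Sinusoidal encoding of the scalar timestamp t in [0, 1].
        freqs = 2.0 ** torch.arange(self.t_freqs).float() * torch.pi * t
        t_enc = torch.cat([freqs.sin(), freqs.cos()]).expand(xyz.shape[0], -1)
        # Deform query points according to time, then sample the static grid.
        warped = xyz + self.deform(torch.cat([xyz, t_enc], dim=-1))
        g = warped.view(1, -1, 1, 1, 3)
        return F.grid_sample(self.grid, g, align_corners=True)[0, :, :, 0, 0].T
```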
arXiv Detail & Related papers (2022-05-30T17:47:31Z)