VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
- URL: http://arxiv.org/abs/2312.01407v1
- Date: Sun, 3 Dec 2023 14:14:35 GMT
- Title: VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
- Authors: Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu
- Abstract summary: VideoRF is the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
We show that the feature image stream can be efficiently compressed by 2D video codecs.
We have developed a real-time interactive player that enables online streaming and rendering of dynamic scenes.
- Score: 56.00479598817949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) excel in photorealistically rendering static
scenes. However, rendering dynamic, long-duration radiance fields on ubiquitous
devices remains challenging, due to data storage and computational constraints.
In this paper, we introduce VideoRF, the first approach to enable real-time
streaming and rendering of dynamic radiance fields on mobile platforms. At the
core is a serialized 2D feature image stream representing the 4D radiance field
all in one. We introduce a tailored training scheme directly applied to this 2D
domain to impose temporal and spatial redundancy on the feature image
stream. By leveraging this redundancy, we show that the feature image stream can
be efficiently compressed by 2D video codecs, which allows us to exploit video
hardware accelerators to achieve real-time decoding. In addition, based on the
feature image stream, we propose a novel rendering pipeline for VideoRF with
specialized space mappings to query radiance properties efficiently. Paired
with a deferred shading model, VideoRF achieves real-time rendering on mobile
devices. We have developed a
real-time interactive player that enables online streaming and rendering of
dynamic scenes, offering a seamless and immersive free-viewpoint experience
across a range of devices, from desktops to mobile phones.
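The pipeline described above lends itself to a compact sketch. The Python below is a minimal illustration, not the paper's implementation: `VOXEL_SIZE`, `STEP`, the `voxel_to_pixel` mapping table, and the stand-in shading network are all assumed names and layouts. It shows the two ideas the abstract highlights: a space mapping from 3D sample positions into the decoded 2D feature image, and deferred shading that runs the decoder once per ray on the accumulated feature rather than once per sample.

```python
import numpy as np

VOXEL_SIZE = 0.05   # assumed voxel edge length
STEP = 0.05         # assumed ray-marching step size

def shade_mlp(feature):
    # Stand-in for the tiny deferred-shading network: any map from the
    # accumulated feature vector to RGB would slot in here.
    return np.tanh(feature[:3])

def render_ray(feature_image, voxel_to_pixel, densities, ray_samples):
    """Accumulate 2D-image features along one ray, then shade once."""
    acc = np.zeros(feature_image.shape[-1])
    transmittance = 1.0
    for p in ray_samples:                      # 3D points sampled on the ray
        key = tuple(np.floor(p / VOXEL_SIZE).astype(int))
        if key not in voxel_to_pixel:          # empty space: nothing stored
            continue
        u, v = voxel_to_pixel[key]             # space mapping: voxel -> pixel
        alpha = 1.0 - np.exp(-densities[key] * STEP)
        acc += transmittance * alpha * feature_image[v, u]
        transmittance *= 1.0 - alpha
    return shade_mlp(acc)                      # deferred shading: once per ray

# Toy usage: a single occupied voxel near the origin.
img = np.random.rand(8, 8, 16)
color = render_ray(img, {(0, 0, 0): (3, 2)}, {(0, 0, 0): 25.0},
                   [np.array([0.01, 0.02, 0.01])])
```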
Related papers
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special free-viewpoint video (FVV) codec that achieves a compression rate of three orders of magnitude, and we provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
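The compactness claim rests on inter-frame redundancy, and the core residual idea fits in a few lines. A hedged sketch, assuming one feature grid per frame; the actual ReRF representation and codec are considerably more involved:

```python
import numpy as np

def to_residual_stream(grids):
    """grids: list of per-frame feature arrays. Storing inter-frame
    residuals concentrates values near zero, which quantizes and
    entropy-codes far better than raw grids."""
    stream = [grids[0]]                      # keyframe stored as-is
    for prev, cur in zip(grids, grids[1:]):
        stream.append(cur - prev)            # small, near-sparse residual
    return stream

def reconstruct(stream):
    frames = [stream[0]]
    for res in stream[1:]:
        frames.append(frames[-1] + res)      # sequential decode
    return frames

grids = [np.random.rand(4, 4, 4, 8) for _ in range(3)]
assert all(np.allclose(a, b)
           for a, b in zip(grids, reconstruct(to_residual_stream(grids))))
```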
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes [61.01853377661283]
We present a Memory-Efficient Radiance Field representation that achieves real-time rendering of large-scale scenes in a browser.
We introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection.
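The contraction can be made concrete. Below is a hedged NumPy sketch following the piecewise-projective contraction described in the MERF paper; the vectorized form and the tie handling at equal-magnitude coordinates are my assumptions. Inside the unit cube, points pass through unchanged; outside, every coordinate is divided by the L-infinity norm except the largest-magnitude one, which is mapped to (2 - 1/|x_j|) sign(x_j). The output stays in [-2, 2]^3 while rays map to piecewise-straight segments, keeping ray-box intersection cheap.

```python
import numpy as np

def contract(x):
    """x: (..., 3) array of scene coordinates -> contracted coordinates."""
    x = np.asarray(x, dtype=np.float64)
    inf_norm = np.max(np.abs(x), axis=-1, keepdims=True)
    out = np.where(inf_norm <= 1.0, x, x / inf_norm)
    # The argmax coordinate instead becomes (2 - 1/|x_j|) * sign(x_j).
    is_max = np.abs(x) == inf_norm
    squashed = (2.0 - 1.0 / np.maximum(np.abs(x), 1e-9)) * np.sign(x)
    return np.where((inf_norm > 1.0) & is_max, squashed, out)

print(contract([0.5, 0.2, -0.1]))   # inside the unit cube: unchanged
print(contract([4.0, 1.0, -2.0]))   # outside: [1.75, 0.25, -0.5]
```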
arXiv Detail & Related papers (2023-02-23T18:59:07Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas.
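The three-way decomposition can be sketched directly. In this hedged mock-up, the probability network, the three field branches, and the blending are stand-ins for the paper's actual architecture: each 4D point receives a softmax over {static, deforming, new}, and the branch outputs are blended accordingly.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def query(x, t, prob_net, static_f, deform_f, new_f):
    """Blend three specialized fields by per-point category probabilities."""
    p_static, p_deform, p_new = softmax(prob_net(x, t))
    return (p_static * static_f(x)         # time-invariant regions
            + p_deform * deform_f(x, t)    # deforming regions
            + p_new * new_f(x, t))         # newly appearing content

# Toy usage with stand-in callables:
out = query(np.zeros(3), 0.5,
            prob_net=lambda x, t: np.array([1.0, 0.0, 0.0]),
            static_f=lambda x: np.ones(4),
            deform_f=lambda x, t: np.zeros(4),
            new_f=lambda x, t: np.zeros(4))
```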
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
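A hedged sketch of the time-aware voxel idea follows; the grid shape, time encoding, and decoder are illustrative assumptions. Features trilinearly interpolated from an explicit voxel grid are combined with a time encoding and fed to a small decoder, so most capacity sits in the fast-to-optimize grid rather than a deep MLP, which is what makes minutes-scale training plausible.

```python
import numpy as np

def trilerp(grid, p):
    """grid: (N, N, N, C) feature volume; p: point in [0, N-1]^3."""
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, grid.shape[0] - 1)
    w = p - i0
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                wt = ((w[0] if dx else 1 - w[0]) *
                      (w[1] if dy else 1 - w[1]) *
                      (w[2] if dz else 1 - w[2]))
                out = out + wt * grid[idx]   # weighted corner feature
    return out

def time_encoding(t, L=4):
    """Sinusoidal encoding of the time stamp (assumed form)."""
    return np.concatenate([[np.sin(2**k * t), np.cos(2**k * t)] for k in range(L)])

grid = np.random.rand(16, 16, 16, 8)
feat = np.concatenate([trilerp(grid, np.array([3.4, 7.9, 1.2])),
                       time_encoding(0.25)])
# sigma, rgb = tiny_mlp(feat)   # hypothetical small decoder network
```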
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
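The extension to the dynamic domain rests on D-NeRF's two-network design: a deformation network maps a point at time t back to a canonical configuration, where a standard static NeRF is queried. A minimal sketch with stand-in callables; the real networks are MLPs with positional encoding:

```python
import numpy as np

def query_dnerf(x, d, t, deform_net, canonical_nerf):
    """Warp (x, t) to canonical space, then query a static NeRF there."""
    dx = deform_net(x, t) if t > 0 else np.zeros_like(x)  # t = 0 is canonical
    return canonical_nerf(x + dx, d)   # density and view-dependent color

# Toy usage with stand-in networks:
sigma, rgb = query_dnerf(np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.5,
                         deform_net=lambda x, t: 0.1 * t * np.ones(3),
                         canonical_nerf=lambda x, d: (1.0, np.full(3, 0.5)))
```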
arXiv Detail & Related papers (2020-11-27T19:06:50Z)