Re-ReND: Real-time Rendering of NeRFs across Devices
- URL: http://arxiv.org/abs/2303.08717v1
- Date: Wed, 15 Mar 2023 15:59:41 GMT
- Title: Re-ReND: Real-time Rendering of NeRFs across Devices
- Authors: Sara Rojas, Jesus Zarzar, Juan Camilo Perez, Artsiom Sanakoyeu, Ali
Thabet, Albert Pumarola, and Bernard Ghanem
- Score: 56.081995086924216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel approach for rendering a pre-trained Neural
Radiance Field (NeRF) in real-time on resource-constrained devices. We
introduce Re-ReND, a method enabling Real-time Rendering of NeRFs across
Devices. Re-ReND is designed to achieve real-time performance by converting the
NeRF into a representation that can be efficiently processed by standard
graphics pipelines. The proposed method distills the NeRF by extracting the
learned density into a mesh, while the learned color information is factorized
into a set of matrices that represent the scene's light field. Factorization
implies the field is queried via inexpensive MLP-free matrix multiplications,
while using a light field allows rendering a pixel by querying the field a
single time, as opposed to hundreds of queries when employing a radiance field.
Since the proposed representation can be implemented using a fragment shader,
it can be directly integrated with standard rasterization frameworks. Our
flexible implementation can render a NeRF in real-time with low memory
requirements and on a wide range of resource-constrained devices, including
mobiles and AR/VR headsets. Notably, we find that Re-ReND can achieve over a
2.6-fold increase in rendering speed versus the state-of-the-art without
perceptible losses in quality.
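To make the factorized query concrete, the following is a minimal NumPy sketch of the kind of MLP-free lookup the abstract describes. The names, shapes, and the specific factorization used here (a per-texel 3 x D feature matrix combined with a D-dimensional view-direction embedding) are illustrative assumptions, not Re-ReND's actual baked layout.
```python
import numpy as np

# Hypothetical sizes; Re-ReND's actual embedding dimension and texture
# resolution may differ.
D = 16            # assumed embedding dimension
H, W = 256, 256   # assumed resolution of the baked UV-space texture

rng = np.random.default_rng(0)
# Per-surface-point color features, baked over the mesh's UV atlas
# (stands in for the paper's learned position factors).
point_features = rng.standard_normal((H, W, 3, D)).astype(np.float32)
# Matrix mapping a view direction to a direction embedding
# (stands in for the paper's learned direction factors).
direction_basis = rng.standard_normal((D, 3)).astype(np.float32)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def shade_pixel(uv, view_dir):
    """Return an RGB color from a single light-field query.

    uv       -- (u, v) texture coordinates of the ray/mesh hit from rasterization
    view_dir -- unit-length camera-ray direction, shape (3,)
    """
    u, v = uv
    texel = point_features[int(v * (H - 1)), int(u * (W - 1))]  # (3, D) texture fetch
    d_emb = direction_basis @ view_dir                          # one matrix-vector product
    return sigmoid(texel @ d_emb)                               # (3,) RGB, no MLP evaluation

# One query per pixel: rasterizing the extracted mesh supplies the hit point,
# so no ray marching or repeated network evaluations are needed.
color = shade_pixel(uv=(0.4, 0.7), view_dir=np.array([0.0, 0.0, 1.0], dtype=np.float32))
print(color)
```
In the paper's setting this lookup would live in a fragment shader, with the factor matrices stored as textures and a standard rasterizer supplying the per-pixel mesh intersection.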
Related papers
- GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer [40.70828307740121] (arXiv, 2024-10-01)
Novel view synthesis (NVS) aims to generate images at arbitrary viewpoints using multi-view images, and recent insights from neural radiance fields (NeRF) have contributed to remarkable improvements.
Generalizable NeRF (G-NeRF) still struggles to represent fine details for a specific scene due to the absence of per-scene optimization.
We propose a Geometry-driven Multi-reference Texture transfer network (GMT), available as a plug-and-play module designed for G-NeRF.
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785] (arXiv, 2024-05-23)
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
- MixRT: Mixed Neural Representations For Real-Time NeRF Rendering [24.040636076067393] (arXiv, 2023-12-19)
We propose MixRT, a novel NeRF representation that includes a low-quality mesh, a view-dependent displacement map, and a compressed NeRF model.
This design effectively harnesses the capabilities of existing graphics hardware, thus enabling real-time NeRF rendering on edge devices.
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721] (arXiv, 2023-08-22)
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real time.
We use a small network similar to NeRF while preserving the rendering speed of a single network forward pass per pixel, as in NeLF (a sketch contrasting this single-query model with NeRF-style volumetric ray marching follows this list).
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688] (arXiv, 2023-04-20)
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242] (arXiv, 2023-04-10)
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering of long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a compression rate of three orders of magnitude and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478] (arXiv, 2021-12-03)
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron.
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689] (arXiv, 2021-12-02)
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
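Several of the works above (NeRDF, the neural duplex radiance fields, and the ray-space embedding light fields), like Re-ReND itself, target the same bottleneck: a radiance field has to be sampled many times along each camera ray and alpha-composited, while a light field maps a ray to a color in one query. The sketch below shows the standard NeRF compositing weights next to a single-query placeholder; the placeholder is purely illustrative and does not implement any specific paper's model.
```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Standard NeRF volume-rendering quadrature.

    sigmas -- densities at N samples along the ray, shape (N,)
    colors -- RGB at those samples, shape (N, 3)
    deltas -- distances between consecutive samples, shape (N,)
    Returns the composited RGB color, shape (3,).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # transmittance T_i
    weights = trans * alphas                                         # w_i = T_i * alpha_i
    return weights @ colors                                          # sum_i w_i * c_i

def query_light_field(ray_origin, ray_dir):
    """Placeholder single-query light field: one evaluation per ray.

    A real light-field model (NeLF, NeRDF, Re-ReND's factorized field, ...)
    would map the ray directly to a color; here we just return a constant.
    """
    return np.array([0.5, 0.5, 0.5])

# A radiance field pays for N evaluations per pixel ...
N = 128
rng = np.random.default_rng(0)
c_volumetric = composite_ray(rng.uniform(0.0, 5.0, N),
                             rng.uniform(0.0, 1.0, (N, 3)),
                             np.full(N, 0.02))
# ... while a light field pays for exactly one.
c_light_field = query_light_field(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(c_volumetric, c_light_field)
```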