Real-Time Neural Light Field on Mobile Devices
- URL: http://arxiv.org/abs/2212.08057v2
- Date: Sat, 24 Jun 2023 20:48:05 GMT
- Title: Real-Time Neural Light Field on Mobile Devices
- Authors: Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun
Fu, Denys Makoviichuk, Sergey Tulyakov, Jian Ren
- Abstract summary: We introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size.
Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes.
- Score: 54.44982318758239
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent efforts in Neural Radiance Fields (NeRF) have shown impressive
results on novel view synthesis by utilizing implicit neural representations to
represent 3D scenes. Because volumetric rendering queries the network hundreds
of times per pixel, NeRF inference is extremely slow, which limits its use on
resource-constrained hardware such as mobile devices. Many works have been
conducted to reduce the latency of running NeRF models; however, most of them
still require a high-end GPU for acceleration or extra storage memory, neither
of which is available on mobile devices. Another emerging direction utilizes
the neural light field (NeLF) for speedup, as only one forward pass per ray is
needed to predict the pixel color. Nevertheless, to reach rendering quality
similar to NeRF's, existing NeLF networks rely on intensive computation, which
is not mobile-friendly. In this work, we propose an efficient network for
neural rendering that runs in real time on mobile devices. We follow the NeLF
setting to train our network. Unlike existing
works, we introduce a novel network architecture that runs efficiently on
mobile devices with low latency and small size, i.e., saving $15\times \sim
24\times$ storage compared with MobileNeRF. Our model achieves high-resolution
generation while maintaining real-time inference for both synthetic and
real-world scenes on mobile devices, e.g., $18.04$ms (iPhone 13) for rendering
one $1008\times756$ image of real 3D scenes. Additionally, we achieve image
quality similar to NeRF and better than MobileNeRF (PSNR $26.15$ vs. $25.91$
on the real-world forward-facing dataset).
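
To make the speed argument above concrete, the sketch below contrasts the two rendering paradigms the abstract describes: NeRF composites many MLP queries along each ray, while a NeLF maps a ray to its pixel color in a single forward pass. This is a minimal illustration, not the paper's architecture; the network sizes, sample counts, and the 6-D origin-plus-direction ray parameterization are all assumptions.

```python
# Minimal sketch contrasting the two paradigms in the abstract: NeRF
# composites many MLP queries along each ray, while a NeLF maps a ray to
# its color in one forward pass. Network sizes, sample counts, and the
# 6-D (origin + direction) ray input are illustrative assumptions.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self, in_dim, out_dim, width=64, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def render_nerf(mlp, rays_o, rays_d, n_samples=128, near=2.0, far=6.0):
    """NeRF-style volume rendering: n_samples MLP calls per ray."""
    t = torch.linspace(near, far, n_samples)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]  # (R,S,3)
    out = mlp(pts.reshape(-1, 3)).reshape(-1, n_samples, 4)  # rgb + density
    rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
    alpha = 1.0 - torch.exp(-sigma * (far - near) / n_samples)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    return ((alpha * trans)[..., None] * rgb).sum(dim=1)  # (R, 3)

def render_nelf(mlp, rays_o, rays_d):
    """NeLF: a single MLP call maps each ray directly to its color."""
    return torch.sigmoid(mlp(torch.cat([rays_o, rays_d], dim=-1)))  # (R, 3)

rays_o = torch.zeros(1024, 3)
rays_d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb_slow = render_nerf(TinyMLP(3, 4), rays_o, rays_d)  # 128 MLP calls per ray
rgb_fast = render_nelf(TinyMLP(6, 3), rays_o, rays_d)  # 1 MLP call per ray
```

The per-ray cost gap (128 MLP calls vs. 1) is what the paper exploits: with a single pass per ray, the remaining question is only how small that one network can be made.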
Related papers
- Efficient Neural Light Fields (ENeLF) for Mobile Devices [0.0]
This research builds upon the novel network architecture introduced by MobileR2L to produce a model that runs efficiently on mobile devices with lower latency and a smaller size.
arXiv Detail & Related papers (2024-06-02T02:55:52Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real time.
We use a small network similar to NeRF's while preserving the rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
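
The entry above compresses the whole ray into one network query: instead of sampling the scene pointwise, the network emits the radiance distribution along the ray in a single pass, and standard alpha compositing turns it into a pixel. A minimal sketch of that idea follows; the bin count, layer widths, and 6-D ray input are assumptions, not the paper's design.

```python
# A sketch of the NeRDF idea as summarized above: one network pass predicts
# (color, density) for S fixed depth bins spanning the whole ray, and alpha
# compositing turns that predicted distribution into a pixel. The bin count,
# layer widths, and 6-D ray input are assumptions.
import torch
import torch.nn as nn

S = 32
net = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, S * 4))

def render_nerdf(rays, near=2.0, far=6.0):
    out = net(rays).reshape(-1, S, 4)            # ONE forward pass per ray
    rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
    alpha = 1.0 - torch.exp(-sigma * (far - near) / S)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]
    return ((alpha * trans)[..., None] * rgb).sum(dim=1)  # (R, 3)
```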
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
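
As the MEIL-NeRF entry above notes, the network itself can act as the memory: queried with past rays, a frozen snapshot regenerates their RGB values as self-supervision, so old frames never need to be stored. Below is a minimal sketch of that replay idea; the module names and ray format are assumptions, and taking the snapshot every step is a simplification.

```python
# A sketch of the "network as memory" replay idea from the MEIL-NeRF entry
# above: a frozen snapshot, queried with past rays, regenerates their RGB
# values as self-supervision, so old frames need not be stored. `model` is
# assumed to map rays (N, 6) to RGB (N, 3); taking the snapshot every step
# is a simplification (in practice it comes from the previous stage).
import copy
import torch
import torch.nn.functional as F

def incremental_step(model, optimizer, new_rays, new_rgb, past_rays):
    memory = copy.deepcopy(model).eval()     # frozen snapshot = the "memory"
    with torch.no_grad():
        past_rgb = memory(past_rays)         # recall old pixels from the net
    loss = (F.mse_loss(model(new_rays), new_rgb)       # fit the new data
            + F.mse_loss(model(past_rays), past_rgb))  # resist forgetting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```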
- Scaling Neural Face Synthesis to High FPS and Low Latency by Neural Caching [12.362614824541824]
Recent neural rendering approaches greatly improve image quality, reaching near photorealism.
The underlying neural networks have high runtime, precluding telepresence and virtual reality applications that require high resolution at low latency.
We break this dependency by caching information from the previous frame to speed up the processing of the current one with an implicit warp.
We test the approach on view-dependent rendering of 3D portrait avatars, as needed for telepresence, on established benchmark sequences.
arXiv Detail & Related papers (2022-11-10T18:58:00Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
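
The R2L entry above hinges on data: a pretrained NeRF teacher can render arbitrarily many rays, giving the deep NeLF student effectively unlimited pseudo ground truth. A minimal sketch of that distillation loop follows; `teacher_render` and `student` are hypothetical callables mapping rays to RGB, and the random-ray sampling scheme is an assumption.

```python
# A sketch of the distillation loop behind the R2L entry above: a frozen
# NeRF teacher renders randomly sampled rays, giving the deep NeLF student
# effectively unlimited pseudo data. `teacher_render` and `student` are
# hypothetical callables mapping rays (N, 6) to RGB (N, 3).
import torch
import torch.nn.functional as F

def sample_rays(n):
    o = torch.rand(n, 3) * 2.0 - 1.0                 # random origins (assumed)
    d = F.normalize(torch.randn(n, 3), dim=-1)       # random unit directions
    return torch.cat([o, d], dim=-1)

def distill_step(teacher_render, student, optimizer, batch=4096):
    rays = sample_rays(batch)
    with torch.no_grad():
        target = teacher_render(rays)                # pseudo ground truth
    loss = F.mse_loss(student(rays), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```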
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522]
We present a method to train a NeRF, then precompute and store (i.e., "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real time.
arXiv Detail & Related papers (2021-03-26T17:59:52Z)
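
Baking, as in the entry above, trades per-query MLP cost for memory: after training, the NeRF is evaluated once per voxel and stored in a grid, so rendering becomes cheap table lookups. A minimal sketch of that precompute-then-lookup pattern follows; the dense grid, resolution, and nearest-neighbor lookup are simplifications (SNeRG itself stores a sparse grid plus a small view-dependence MLP).

```python
# A sketch of the baking pattern from the entry above: evaluate the trained
# NeRF once per voxel, store the result in a grid, and render with cheap
# lookups instead of MLP calls. The dense grid, resolution, and
# nearest-neighbor lookup are simplifications; SNeRG itself uses a sparse
# grid plus a small view-dependence MLP. `nerf` maps points (N, 3) to
# (rgb, density) (N, 4).
import torch

def bake(nerf, res=128, lo=-1.0, hi=1.0):
    axis = torch.linspace(lo, hi, res)
    pts = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    with torch.no_grad():
        return nerf(pts.reshape(-1, 3)).reshape(res, res, res, 4)

def lookup(grid, pts, lo=-1.0, hi=1.0):
    res = grid.shape[0]
    idx = ((pts - lo) / (hi - lo) * (res - 1)).round().long().clamp(0, res - 1)
    return grid[idx[..., 0], idx[..., 1], idx[..., 2]]  # nearest voxel
```

Rendering then marches rays through `lookup` and composites exactly as in the volume-rendering sketch near the abstract, with no network in the loop.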
- FastNeRF: High-Fidelity Neural Rendering at 200FPS [17.722927021159393]
We propose FastNeRF, a system capable of rendering high fidelity images at 200Hz on a high-end consumer GPU.
The proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF.
arXiv Detail & Related papers (2021-03-18T17:09:12Z)
- RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices [57.877112704841366]
This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs.
For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobile devices.
arXiv Detail & Related papers (2020-07-20T02:05:32Z)