LightSpeed: Light and Fast Neural Light Fields on Mobile Devices
- URL: http://arxiv.org/abs/2310.16832v2
- Date: Thu, 26 Oct 2023 20:02:03 GMT
- Title: LightSpeed: Light and Fast Neural Light Fields on Mobile Devices
- Authors: Aarush Gupta, Junli Cao, Chaoyang Wang, Ju Hu, Sergey Tulyakov, Jian Ren, László A. Jeni
- Abstract summary: Real-time novel-view image synthesis on mobile devices is prohibitively expensive due to limited computational power and storage.
Recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices.
- Score: 29.080086014074613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time novel-view image synthesis on mobile devices is prohibitively expensive due to limited computational power and storage. Volumetric rendering methods, such as NeRF and its derivatives, are not suitable for mobile devices because of their high computational cost. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choices of ray representation are stratified ray sampling and Plücker coordinates, overlooking the classic light slab (two-plane) representation, which has long been the preferred representation for interpolating between light field views. In this work, we find that the light slab is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation, enabling us to learn the 4D ray space with feature grids that are significantly faster to train and render. Although the light-slab representation was designed mostly for frontal views, we show that it can be extended to non-frontal scenes using a divide-and-conquer strategy. Our method offers superior rendering quality compared to previous light field methods and achieves a significantly improved trade-off between rendering quality and speed.
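As an illustration of the two-plane (light slab) parameterization mentioned in the abstract, the sketch below maps a ray to 4D light-slab coordinates (u, v, s, t) by intersecting it with two parallel planes. This is a minimal, hypothetical sketch: the plane placement (z = 0 and z = 1), the function name, and the NumPy implementation are assumptions made for illustration, not the authors' code.

    # Hypothetical sketch of the classic light-slab (two-plane) ray
    # parameterization: a ray is described by its intersections (u, v) and
    # (s, t) with two parallel planes, giving a 4D coordinate.
    # Plane placement (z = 0 and z = 1) is an assumption for illustration.
    import numpy as np

    def rays_to_light_slab(origins, directions, z_near=0.0, z_far=1.0):
        """Map rays to 4D light-slab coordinates (u, v, s, t).

        origins, directions: float arrays of shape (N, 3). Rays parallel to
        the planes (directions[:, 2] == 0) are not handled by this sketch.
        """
        t_near = (z_near - origins[:, 2]) / directions[:, 2]
        t_far = (z_far - origins[:, 2]) / directions[:, 2]
        uv = origins[:, :2] + t_near[:, None] * directions[:, :2]  # hit on near plane
        st = origins[:, :2] + t_far[:, None] * directions[:, :2]   # hit on far plane
        return np.concatenate([uv, st], axis=-1)  # shape (N, 4)

The resulting 4D coordinate can then index a learned feature grid (for example, two 2D grids queried bilinearly at (u, v) and (s, t)) whose features feed a small MLP that outputs the pixel color; the abstract does not pin down this exact factorization, so treat it as one plausible realization of a feature-grid light field rather than the paper's architecture.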
Related papers
- UniVoxel: Fast Inverse Rendering by Unified Voxelization of Scene Representation [66.95976870627064]
We design a Unified Voxelization framework for explicit learning of scene representations, dubbed UniVoxel.
We propose to encode a scene into a latent volumetric representation, based on which the geometry, materials and illumination can be readily learned via lightweight neural networks.
Experiments show that UniVoxel boosts the optimization efficiency significantly compared to other methods, reducing the per-scene training time from hours to 18 minutes, while achieving favorable reconstruction quality.
arXiv Detail & Related papers (2024-07-28T17:24:14Z) - Neural Free-Viewpoint Relighting for Glossy Indirect Illumination [44.32630651762033]
We present a hybrid neural-wavelet PRT solution for high-frequency indirect illumination, including glossy reflection, for relighting with changing view.
We demonstrate real-time rendering of challenging scenes involving view-dependent reflections and even caustics.
arXiv Detail & Related papers (2023-07-12T17:56:09Z) - Efficient Neural Radiance Fields with Learned Depth-Guided Sampling [43.79307270743013]
We present a hybrid scene representation which combines the best of implicit radiance fields and explicit depth maps for efficient rendering.
Experiments show that the proposed approach exhibits state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets.
We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real-time.
arXiv Detail & Related papers (2021-12-02T18:59:32Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering [18.254077751772005]
Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes.
This type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times.
We present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance.
arXiv Detail & Related papers (2021-11-05T17:50:44Z) - Learning Neural Transmittance for Efficient Rendering of Reflectance
Fields [43.24427791156121]
We propose a novel method based on precomputed Neural Transmittance Functions to accelerate rendering of neural reflectance fields.
Results on real and synthetic scenes demonstrate almost two orders of magnitude speedup for rendering under environment maps with minimal accuracy loss.
arXiv Detail & Related papers (2021-10-25T21:12:25Z) - Fast Training of Neural Lumigraph Representations using Meta Learning [109.92233234681319]
We develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time.
Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection.
We show that MetaNLR++ achieves similar or better photorealistic novel view synthesis results in a fraction of the time that competing methods require.
arXiv Detail & Related papers (2021-06-28T18:55:50Z) - Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric renderers (a minimal sketch of this cost contrast appears after the related-papers list).
arXiv Detail & Related papers (2021-06-04T17:54:49Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
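To make the rendering-cost contrast discussed in several of the papers above concrete (many network queries per ray for volume rendering versus a single query for a light field), here is a minimal NumPy sketch. The function names and interfaces are hypothetical and not taken from any of the listed papers; only the volume-rendering compositing math follows the standard NeRF formulation.

    import numpy as np

    def volume_render_ray(densities, colors, deltas):
        """Standard volume-rendering compositing along one ray (NeRF-style).

        densities: (S,) non-negative densities at S samples along the ray
        colors:    (S, 3) RGB predictions at those samples
        deltas:    (S,) distances between consecutive samples
        """
        alphas = 1.0 - np.exp(-densities * deltas)  # per-sample opacity
        transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
        weights = transmittance * alphas            # compositing weights
        return (weights[:, None] * colors).sum(axis=0)  # final pixel color

    def light_field_render_ray(ray_coords, light_field_net):
        """A light-field method instead maps the whole ray to a color directly,
        so rendering a pixel costs a single network evaluation."""
        return light_field_net(ray_coords)

The first function must be called once per ray with S sampled points (each requiring a network query for density and color), while the second touches the network only once per ray, which is the efficiency argument behind light-field-style methods such as LFNs and LightSpeed.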