Efficient View Synthesis with Neural Radiance Distribution Field
- URL: http://arxiv.org/abs/2308.11130v1
- Date: Tue, 22 Aug 2023 02:23:28 GMT
- Title: Efficient View Synthesis with Neural Radiance Distribution Field
- Authors: Yushuang Wu, Xiao Li, Jinglu Wang, Xiaoguang Han, Shuguang Cui, Yan Lu
- Abstract summary: We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a network similar in size to NeRF's while preserving NeLF's rendering speed of a single network forwarding per pixel.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
- Score: 61.22920276806721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work on Neural Radiance Fields (NeRF) has demonstrated significant
advances in high-quality view synthesis. A major limitation of NeRF is its low
rendering efficiency due to the need for multiple network forwardings to render
a single pixel. Existing methods to improve NeRF either reduce the number of
required samples or optimize the implementation to accelerate the network
forwarding. Despite these efforts, the problem of multiple sampling persists
due to the intrinsic representation of radiance fields. In contrast, Neural
Light Fields (NeLF) reduce the computation cost of NeRF by querying only one
single network forwarding per pixel. To achieve visual quality close to NeRF's,
existing NeLF methods require a significantly larger network capacity, which
limits their rendering efficiency in practice. In this work, we propose a new
representation called Neural Radiance Distribution Field (NeRDF) that targets
efficient view synthesis in real-time. Specifically, we use a small network
similar to NeRF while preserving the rendering speed with a single network
forwarding per pixel as in NeLF. The key is to model the radiance distribution
along each ray with frequency basis and predict frequency weights using the
network. Pixel values are then computed via volume rendering on radiance
distributions. Experiments show that our proposed method offers a better
trade-off among speed, quality, and network size than existing methods: we
achieve a ~254x speed-up over NeRF with similar network size, with only a
marginal performance decline. Our project page is at
yushuang-wu.github.io/NeRDF.
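The abstract's core idea can be sketched compactly: one network forwarding predicts a set of frequency weights that describe the whole radiance/density distribution along a ray, and the pixel is then obtained by standard volume rendering on that analytic profile, with no further network queries. The NumPy sketch below is illustrative only; the truncated Fourier basis, the softplus activation, and all names are our assumptions, not the paper's exact parameterization.

```python
import numpy as np

def density_from_frequencies(weights, t, period=1.0):
    # Evaluate a 1D profile along the ray as a truncated Fourier series,
    # then map through softplus so the density stays non-negative.
    # (Illustrative basis; the paper's exact parameterization may differ.)
    n_pairs = (len(weights) - 1) // 2
    basis = [np.ones_like(t)]
    for k in range(1, n_pairs + 1):
        basis.append(np.cos(2 * np.pi * k * t / period))
        basis.append(np.sin(2 * np.pi * k * t / period))
    basis = np.stack(basis[: len(weights)], axis=0)     # (K, T)
    return np.log1p(np.exp(weights @ basis))            # softplus

def volume_render(sigma, color, t):
    # Standard NeRF-style quadrature: alpha-composite the samples along
    # the ray. Here sigma comes from the analytic frequency profile,
    # not from per-sample network queries.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # (T,)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    w = alpha * trans
    return w @ color                                    # (3,) pixel RGB
```

In this framing the expensive part of NeRF (hundreds of forwardings per ray) collapses into predicting `weights` once; the quadrature itself is a few vector operations per ray.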
Related papers
- ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields [27.008124938806944]
We propose ProNeRF, which provides an optimal trade-off among memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
arXiv Detail & Related papers (2023-12-13T13:37:32Z)
- Re-ReND: Real-time Rendering of NeRFs across Devices [56.081995086924216]
Re-ReND is designed to achieve real-time performance by converting the NeRF into a representation that can be efficiently processed by standard graphics pipelines.
We find that Re-ReND can achieve over a 2.6-fold increase in rendering speed versus the state-of-the-art without perceptible losses in quality.
arXiv Detail & Related papers (2023-03-15T15:59:41Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
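The "network as memory" idea in the MEIL-NeRF summary can be sketched: instead of storing past training frames, query a frozen snapshot of the network with previously seen rays and use its outputs as replay supervision, so memory stays constant. The stand-in model and all names below are illustrative assumptions, not MEIL-NeRF's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen snapshot of the network trained so far.
# No images are stored -- only the rays that were already seen.
W_prev = rng.standard_normal((6, 3))
def prev_network(rays):
    return 0.5 + 0.5 * np.tanh(rays @ W_prev)

def replay_batch(old_rays, batch_size=256):
    # Sample old rays and label them with the frozen snapshot; these
    # (ray, RGB) pairs substitute for the ground truth of past frames
    # when training continues on new data.
    idx = rng.integers(0, len(old_rays), size=batch_size)
    rays = old_rays[idx]
    return rays, prev_network(rays)
```

The memory cost is the size of one network snapshot plus the ray parameters, independent of how many frames have been seen.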
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF offers a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
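The R2L recipe described above hinges on abundant data: render many pseudo rays with a pretrained NeRF teacher, then fit the NeLF student to those (ray, RGB) pairs. The sketch below is a toy version under stated assumptions: the teacher is a stand-in function rather than a real NeRF, and the student is random tanh features plus least squares rather than R2L's deep residual MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained NeRF renderer: any black-box mapping from
# a 6D ray (origin + direction) to RGB works for this sketch.
W_teacher = rng.standard_normal((6, 3))
def teacher_nerf(rays):
    return 0.5 + 0.5 * np.tanh(rays @ W_teacher)

# Step 1: synthesize abundant pseudo data -- the "sufficient data" the
# R2L summary emphasizes -- by querying the teacher on dense rays.
rays = rng.standard_normal((4096, 6))
rgb = teacher_nerf(rays)

# Step 2: fit a tiny student on the pseudo data. A real NeLF student
# is a deep MLP trained by SGD; random tanh features + least squares
# keep this sketch dependency-free.
F = rng.standard_normal((6, 256))
def features(r):
    h = np.tanh(r @ F)
    return np.concatenate([h, np.ones((len(r), 1))], axis=1)

coef, *_ = np.linalg.lstsq(features(rays), rgb, rcond=None)
def student_nelf(r):
    # One "forward pass" per ray -- no ray marching at render time.
    return features(r) @ coef
```

Once distilled, the student answers each ray query with a single matrix product, which is the efficiency argument behind NeRF-to-NeLF distillation.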
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Recursive-NeRF: An Efficient and Dynamically Growing NeRF [34.768382663711705]
Recursive-NeRF is an efficient rendering and training approach for the Neural Radiance Field (NeRF) method.
Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level.
Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality.
arXiv Detail & Related papers (2021-05-19T12:51:54Z)
- Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields [45.84983186882732]
"mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale.
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts.
Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset.
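The anti-aliasing mechanism summarized above rests on a closed form: when a conical frustum is approximated by a Gaussian, the expected positional encoding is E[sin(a·x)] = sin(a·μ)·exp(−a²·σ²/2) for x ~ N(μ, σ²), so large pixel footprints automatically damp high frequencies. A minimal scalar sketch (the function name and frequency count are our choices):

```python
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=4):
    # For x ~ N(mu, var): E[sin(a*x)] = sin(a*mu) * exp(-a^2 * var / 2),
    # and likewise for cos. Large var (big footprint) washes out the
    # high-frequency terms, which is what suppresses aliasing.
    enc = []
    for k in range(num_freqs):
        a = 2.0 ** k
        damp = np.exp(-0.5 * a * a * var)
        enc.append(np.sin(a * mu) * damp)
        enc.append(np.cos(a * mu) * damp)
    return np.array(enc)
```

At var = 0 this reduces to NeRF's plain positional encoding; as var grows, the encoding smoothly falls back to low frequencies only.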
arXiv Detail & Related papers (2021-03-24T18:02:11Z)
- DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using Depth Oracle Networks [6.2444658061424665]
DONeRF is a dual network design with a depth oracle network as a first step and a locally sampled shading network for ray accumulation.
We are the first to render raymarching-based neural representations at interactive frame rates (15 frames per second at 800x800) on a single GPU.
arXiv Detail & Related papers (2021-03-04T18:55:09Z)
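The DONeRF summary above describes a two-stage pipeline: a cheap depth oracle first predicts where the surface lies along each ray, and the shading network is then evaluated only on a handful of samples packed around that depth instead of marching the whole ray. A minimal sketch of the sample placement (uniform window for simplicity; DONeRF itself uses a warped, non-uniform placement):

```python
import numpy as np

def oracle_guided_samples(pred_depth, n_samples=8, window=0.1,
                          near=0.0, far=1.0):
    # Concentrate the few shading-network samples in a narrow window
    # around the oracle's predicted depth, clipped to the ray bounds.
    t = pred_depth + np.linspace(-window / 2, window / 2, n_samples)
    return np.clip(t, near, far)
```

The speed-up comes from the sample count: roughly 8 locally placed queries per ray versus the hundreds of uniformly spaced queries NeRF needs.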
This list is automatically generated from the titles and abstracts of the papers in this site.