Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance
Fields
- URL: http://arxiv.org/abs/2103.13415v1
- Date: Wed, 24 Mar 2021 18:02:11 GMT
- Title: Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance
Fields
- Authors: Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman,
Ricardo Martin-Brualla, Pratul P. Srinivasan
- Abstract summary: "mip-NeRF" (à la "mipmap") extends NeRF to represent the scene at a continuously-valued scale.
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts.
Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset.
- Score: 45.84983186882732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rendering procedure used by neural radiance fields (NeRF) samples a scene
with a single ray per pixel and may therefore produce renderings that are
excessively blurred or aliased when training or testing images observe scene
content at different resolutions. The straightforward solution of supersampling
by rendering with multiple rays per pixel is impractical for NeRF, because
rendering each ray requires querying a multilayer perceptron hundreds of times.
Our solution, which we call "mip-NeRF" (à la "mipmap"), extends NeRF to
represent the scene at a continuously-valued scale. By efficiently rendering
anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable
aliasing artifacts and significantly improves NeRF's ability to represent fine
details, while also being 7% faster than NeRF and half the size. Compared to
NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with
NeRF and by 60% on a challenging multiscale variant of that dataset that we
present. Mip-NeRF is also able to match the accuracy of a brute-force
supersampled NeRF on our multiscale dataset while being 22x faster.
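To make the mechanism concrete: the scale-aware part of mip-NeRF is its integrated positional encoding (IPE), which featurizes a Gaussian fit to each conical frustum by the expected value of NeRF's sin/cos encoding. Below is a minimal JAX sketch of that expectation, assuming the frustum-to-Gaussian fit (mean and diagonal covariance) happens upstream; names are illustrative.

```python
import jax.numpy as jnp

def integrated_pos_enc(mean, var, num_freqs=16):
    """Expected positional encoding of a Gaussian-distributed coordinate.

    mip-NeRF fits a Gaussian to each conical frustum; for x ~ N(mu, s2),
    E[sin(2^l x)] = sin(2^l mu) * exp(-4^l s2 / 2), and likewise for cos,
    so high frequencies fade out as the frustum (pixel footprint) grows.

    mean: [..., 3] Gaussian means; var: [..., 3] diagonal covariances.
    """
    scales = 2.0 ** jnp.arange(num_freqs)               # 2^0 .. 2^(L-1)
    scaled_mean = mean[..., None, :] * scales[:, None]  # [..., L, 3]
    scaled_var = var[..., None, :] * scales[:, None] ** 2
    atten = jnp.exp(-0.5 * scaled_var)                  # scale-aware falloff
    feats = jnp.concatenate(
        [jnp.sin(scaled_mean) * atten, jnp.cos(scaled_mean) * atten], axis=-1)
    return feats.reshape(*feats.shape[:-2], -1)         # [..., L * 6]
```

Because the attenuation drives high-frequency features toward zero for large frustums, a single MLP sees scale-consistent inputs, which is what lets mip-NeRF avoid supersampling.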
Related papers
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
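A hedged sketch of that render-time rule, assuming a mipmap-style mapping from a sample's footprint radius to a fractional pyramid level; the helper names and the exact selection rule are illustrative, not PyNeRF's published scheme.

```python
import jax.numpy as jnp

def footprint_to_level(sample_radius, finest_radius=1.0, num_levels=8):
    """Map a sample's footprint radius to a fractional pyramid level, so
    samples covering larger volumes read from coarser grids."""
    level = jnp.log2(jnp.maximum(sample_radius / finest_radius, 1.0))
    return jnp.clip(level, 0.0, num_levels - 1.0)

def query_pyramid(heads, position, level):
    """Blend the two nearest resolution heads, mipmap-style. `heads` is a
    Python list of per-resolution callables; `level` is a scalar (eager)."""
    lo = int(jnp.floor(level))
    hi = min(lo + 1, len(heads) - 1)
    w = float(level) - lo
    return (1.0 - w) * heads[lo](position) + w * heads[hi](position)
```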
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real time.
We use a small network similar to NeRF's while preserving rendering speed through a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
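The speed trade-off hinges on query count, which a toy comparison makes explicit; the sketch below contrasts NeRF's per-sample queries with the single per-ray forward pass of NeLF-style models such as NeRDF. Interfaces are assumptions, and NeRDF's actual output (a radiance distribution along the ray) is not reproduced here.

```python
import jax.numpy as jnp

def composite(density, color, ts):
    """Standard NeRF volume-rendering quadrature along one ray."""
    deltas = jnp.concatenate([ts[1:] - ts[:-1], jnp.array([1e10])])
    alpha = 1.0 - jnp.exp(-density * deltas)            # [S]
    trans = jnp.cumprod(jnp.concatenate([jnp.ones(1), 1.0 - alpha[:-1]]))
    return ((alpha * trans)[:, None] * color).sum(axis=0)

def render_nerf_style(mlp, origin, direction, ts):
    """NeRF: one network query per sample; cost scales with len(ts)."""
    points = origin + ts[:, None] * direction           # [S, 3]
    density, color = mlp(points)                        # S queries per pixel
    return composite(density, color, ts)

def render_ray_style(ray_net, origin, direction):
    """NeLF/NeRDF: a single forward pass per ray, i.e. per pixel."""
    return ray_net(jnp.concatenate([origin, direction]))
```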
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
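A minimal sketch of that distillation recipe, assuming a frozen teacher NeRF renderer and a student light-field network (all names hypothetical): the point is that the teacher can label unlimited rays, which is where the "sufficient data" comes from.

```python
import jax
import jax.numpy as jnp

def sample_random_rays(key, n):
    """Hypothetical ray sampler; a real pipeline would draw rays from the
    training cameras' frustum instead of a unit box."""
    k1, k2 = jax.random.split(key)
    origins = jax.random.uniform(k1, (n, 3), minval=-1.0, maxval=1.0)
    dirs = jax.random.normal(k2, (n, 3))
    dirs = dirs / jnp.linalg.norm(dirs, axis=-1, keepdims=True)
    return jnp.concatenate([origins, dirs], axis=-1)    # [n, 6]

def make_distillation_batch(key, teacher_render, num_rays=8192):
    """Pseudo-data synthesis: the frozen teacher NeRF labels random rays,
    giving the student NeLF effectively unlimited training pairs."""
    rays = sample_random_rays(key, num_rays)
    return rays, teacher_render(rays)                   # slow, but offline

def distill_step(params, student_apply, rays, target_rgb, lr=5e-4):
    """One SGD step fitting the student light field to the teacher."""
    def loss_fn(p):
        pred = student_apply(p, rays)                   # one pass per ray
        return jnp.mean((pred - target_rgb) ** 2)
    loss, grads = jax.value_and_grad(loss_fn)(params)
    params = jax.tree_util.tree_map(lambda w, g: w - lr * g, params, grads)
    return params, loss
```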
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
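One way to picture the super-sampling: split every low-resolution pixel into a grid of sub-pixel rays and require their average rendered color to match the LR observation. The sketch below is a hedged illustration of that constraint, not NeRF-SR's exact loss; the interfaces are assumptions.

```python
import jax.numpy as jnp

def subpixel_offsets(scale=4):
    """Centers of a scale x scale sub-pixel grid inside one LR pixel."""
    c = (jnp.arange(scale) + 0.5) / scale
    du, dv = jnp.meshgrid(c, c, indexing="xy")
    return jnp.stack([du.ravel(), dv.ravel()], axis=-1)   # [scale^2, 2]

def supersampling_loss(render_rgb, pixel_xy, lr_color, scale=4):
    """Render one HR ray per sub-pixel and pull their mean toward the
    observed LR color (render_rgb: [K, 2] pixel coords -> [K, 3] RGB)."""
    rays_xy = pixel_xy[None, :] + subpixel_offsets(scale)  # [scale^2, 2]
    hr = render_rgb(rays_xy)
    return jnp.mean((hr.mean(axis=0) - lr_color) ** 2)
```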
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
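A minimal sketch of the inversion loop, assuming a differentiable render_from_pose that maps a pose parameterization to an image through a frozen NeRF; iNeRF itself optimizes an exponential-coordinate pose over a subset of sampled rays, while this toy version descends the full-image photometric loss with plain gradient descent.

```python
import jax
import jax.numpy as jnp

def inerf_step(pose, render_from_pose, observed, lr=1e-2):
    """One update: descend the photometric loss with respect to the
    camera pose while the NeRF's weights stay frozen."""
    def loss_fn(p):
        return jnp.mean((render_from_pose(p) - observed) ** 2)
    loss, grad = jax.value_and_grad(loss_fn)(pose)
    return pose - lr * grad, loss

def estimate_pose(init_pose, render_from_pose, observed, steps=300):
    """Plain gradient descent from an initial pose guess."""
    pose = init_pose
    for _ in range(steps):
        pose, _ = inerf_step(pose, render_from_pose, observed)
    return pose
```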
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.