Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
- URL: http://arxiv.org/abs/2304.06706v3
- Date: Thu, 26 Oct 2023 22:19:56 GMT
- Title: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
- Authors: Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan,
Peter Hedman
- Abstract summary: We show how a technique that combines mip-NeRF 360 and grid-based models can yield error rates that are 8% - 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360.
- Score: 64.13207562222094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field training can be accelerated through the use of
grid-based representations in NeRF's learned mapping from spatial coordinates
to colors and volumetric density. However, these grid-based approaches lack an
explicit understanding of scale and therefore often introduce aliasing, usually
in the form of jaggies or missing scene content. Anti-aliasing has previously
been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone
rather than points along a ray, but this approach is not natively compatible
with current grid-based techniques. We show how ideas from rendering and signal
processing can be used to construct a technique that combines mip-NeRF 360 and
grid-based models such as Instant NGP to yield error rates that are 8% - 77%
lower than either prior technique, and that trains 24x faster than mip-NeRF
360.
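As a rough illustration of the combination described above, the sketch below multisamples points inside a conical frustum and averages their grid features, so a grid-backed model sees the frustum's footprint rather than a single ray point. The `grid_features` lookup is a hypothetical stand-in for an Instant NGP hash grid, and the random jitter approximates Zip-NeRF's structured multisampling pattern; this is a conceptual sketch, not the authors' implementation.

```python
# Conceptual sketch of cone multisampling for a grid-backed NeRF
# (a loose reading of Zip-NeRF's idea, not the authors' code).
import numpy as np

def grid_features(points, num_features=8):
    """Hypothetical multi-resolution grid lookup; a fixed random
    projection stands in for a trained hash-grid feature table."""
    rng = np.random.default_rng(0)
    proj = rng.normal(size=(3, num_features))
    return np.sin(points @ proj)  # placeholder featurization

def frustum_multisample(origin, direction, t0, t1, radius, n=6, seed=0):
    """Draw n points spread through the conical frustum [t0, t1].
    Zip-NeRF uses a structured hexagonal pattern; random jitter is a
    simple stand-in that captures the same intent."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(t0, t1, size=n)
    # Offset each point perpendicular to the ray, scaled by the cone radius.
    offsets = rng.normal(size=(n, 3))
    offsets -= np.outer(offsets @ direction, direction)  # remove axial part
    offsets *= (radius * t / np.linalg.norm(offsets, axis=1))[:, None]
    return origin + t[:, None] * direction + offsets

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
pts = frustum_multisample(origin, direction, t0=1.0, t1=1.2, radius=0.01)
# Averaging features over the multisamples is what gives the grid model
# an awareness of the frustum's footprint (the anti-aliasing effect).
feat = grid_features(pts).mean(axis=0)
```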
Related papers
- Freq-Mip-AA: Frequency Mip Representation for Anti-Aliasing Neural Radiance Fields [3.796287987989994]
Mip-NeRF proposed rendering each pixel with a conical frustum and introduced integrated positional encoding (IPE).
While effective, this approach requires long training times due to its reliance on a volumetric architecture.
We propose a novel anti-aliasing technique that utilizes grid-based representations, which typically train significantly faster.
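For context, IPE replaces the point-wise positional encoding with its expectation under a Gaussian fit to the frustum, which damps high frequencies as the footprint grows. A minimal numpy sketch of that closed form (the frequency count is illustrative):

```python
# Integrated positional encoding (IPE) as introduced by mip-NeRF:
# encodings of a Gaussian (mean mu, per-axis variance var) are damped
# at high frequencies, which suppresses aliasing for large footprints.
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=4):
    """Expected sin/cos encoding of N(mu, diag(var)).
    E[sin(a x)] = sin(a mu) * exp(-0.5 a^2 var), likewise for cos."""
    feats = []
    for j in range(num_freqs):
        a = 2.0 ** j
        damp = np.exp(-0.5 * (a ** 2) * var)
        feats.append(np.sin(a * mu) * damp)
        feats.append(np.cos(a * mu) * damp)
    return np.concatenate(feats)

mu = np.array([0.3, -0.1, 0.7])      # frustum mean
var = np.array([1e-3, 1e-3, 5e-3])   # frustum variance along each axis
enc = integrated_pos_enc(mu, var)    # shape (2 * 3 * num_freqs,)
```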
arXiv Detail & Related papers (2024-06-19T06:33:56Z)
- Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields [12.910072009005065]
We present mip-Grid, a novel approach that integrates anti-aliasing techniques into grid-based representations for radiance fields.
The proposed method generates multi-scale grids by applying simple convolution operations over a shared grid representation and uses the scale coordinate to retrieve features at different scales from the generated multi-scale grids.
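A minimal numpy sketch of this recipe, under assumed details (3-tap box blurs for the convolutions, linear blending between levels):

```python
# Sketch of the multi-scale-grid idea described above: blur one shared
# feature grid with simple convolutions to get coarser levels, then use
# a continuous scale coordinate to interpolate between levels.
import numpy as np

def make_scale_pyramid(grid, num_levels=3):
    """Each level blurs the previous one with a 3-tap box filter
    along both spatial axes (grid: [H, W, C])."""
    levels = [grid]
    for _ in range(num_levels - 1):
        g = levels[-1]
        g = (np.roll(g, 1, 0) + g + np.roll(g, -1, 0)) / 3.0
        g = (np.roll(g, 1, 1) + g + np.roll(g, -1, 1)) / 3.0
        levels.append(g)
    return levels

def lookup(levels, y, x, scale):
    """Fetch a feature at cell (y, x); `scale` in [0, L-1]
    linearly blends the two adjacent pyramid levels."""
    lo = int(np.floor(scale))
    hi = min(lo + 1, len(levels) - 1)
    w = scale - lo
    return (1 - w) * levels[lo][y, x] + w * levels[hi][y, x]

grid = np.random.default_rng(0).normal(size=(16, 16, 4))
levels = make_scale_pyramid(grid)
f = lookup(levels, y=5, x=9, scale=1.3)  # feature between levels 1 and 2
```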
arXiv Detail & Related papers (2024-02-22T00:45:40Z)
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
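A toy sketch of such a resolution-selection rule; the grid resolutions and the footprint heuristic below are assumptions, not the paper's settings:

```python
# Pick the model head whose grid resolution matches the volume a
# sample covers: large samples fall back to coarse heads.
import numpy as np

HEAD_RESOLUTIONS = [64, 128, 256, 512]  # coarse -> fine grids

def select_head(sample_radius, scene_extent=1.0):
    """Choose the coarsest grid whose cell size does not exceed the
    sample's footprint, so each sample spans at least one cell."""
    for i, res in enumerate(HEAD_RESOLUTIONS):
        cell = scene_extent / res
        if cell <= sample_radius:
            return i
    return len(HEAD_RESOLUTIONS) - 1

print(select_head(0.02))   # large footprint -> coarse head (index 0)
print(select_head(0.003))  # small footprint -> finest head (index 3)
```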
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
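One plausible reading of this design is that each sub-space yields its own render plus a per-pixel blending weight, and the final image composes them; a toy sketch under that assumption (the softmax blend and shapes are illustrative, not the paper's exact operators):

```python
# Compose K parallel sub-space renders with per-pixel weights.
import numpy as np

def compose_subspaces(colors, weights):
    """colors: [K, H, W, 3] per-sub-space renders,
    weights: [K, H, W] unnormalized blending weights (logits)."""
    w = np.exp(weights - weights.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)        # softmax over K sub-spaces
    return (w[..., None] * colors).sum(axis=0)  # [H, W, 3]

K, H, W = 4, 8, 8
rng = np.random.default_rng(0)
img = compose_subspaces(rng.uniform(size=(K, H, W, 3)),
                        rng.normal(size=(K, H, W)))
```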
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
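For intuition, selecting a level of such a multiscale voxel grid can follow the classic mipmapping rule: the level rises by one for every doubling of the pixel's world-space footprint. A minimal sketch of that standard rule, assumed here rather than taken from the paper:

```python
# Map a pixel's world-space footprint to a continuous mip level of a
# precomputed voxel-grid pyramid, as in classic texture mipmapping.
import numpy as np

def mip_level(footprint, base_cell, num_levels):
    """Level 0 when the footprint matches the finest cell,
    +1 for every doubling of the footprint; clamped to the pyramid."""
    level = np.log2(np.maximum(footprint / base_cell, 1e-8))
    return float(np.clip(level, 0.0, num_levels - 1))

print(mip_level(footprint=0.04, base_cell=0.01, num_levels=4))  # -> 2.0
```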
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Balanced Spherical Grid for Egocentric View Synthesis [6.518792457424123]
We present EgoNeRF, a practical solution to reconstruct large-scale real-world environments for VR assets.
Given a few seconds of casually captured 360 video, EgoNeRF can efficiently build neural radiance fields.
arXiv Detail & Related papers (2023-03-22T09:17:01Z)
- Boosting Point Clouds Rendering via Radiance Mapping [49.24193509772339]
We focus on boosting the image quality of point cloud rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel.
Our method achieves state-of-the-art rendering quality on point clouds, outperforming prior works by notable margins.
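A toy sketch of that single-evaluation idea: once a rasterizer supplies the surface point each pixel sees, one network call maps point and view direction to a color, with no per-ray marching. The tiny untrained MLP below is purely illustrative:

```python
# One forward pass per pixel replaces dozens of ray-march samples.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(6, 32)), rng.normal(size=(32, 3))

def shade(surface_points, view_dirs):
    """surface_points, view_dirs: [N, 3] from a point-cloud rasterizer.
    Maps (point, view direction) -> RGB with a single evaluation."""
    x = np.concatenate([surface_points, view_dirs], axis=1)  # [N, 6]
    h = np.maximum(x @ W1, 0.0)                              # ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))                   # RGB in (0, 1)

colors = shade(rng.uniform(size=(4, 3)), rng.normal(size=(4, 3)))
```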
arXiv Detail & Related papers (2022-10-27T01:25:57Z)
- Point-NeRF: Point-based Neural Radiance Fields [39.38262052015925]
Point-NeRF uses neural 3D point clouds, with associated neural features, to model a radiance field.
It can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline.
Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers via a novel pruning and growing mechanism.
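A minimal sketch of such per-point aggregation, assuming a simple k-nearest-neighbor search with inverse-distance weights (a simplification, not the paper's exact operators):

```python
# A shading point on a marched ray collects the features of its
# nearest neural points, weighted by inverse distance.
import numpy as np

def aggregate(query, point_xyz, point_feat, k=8, eps=1e-6):
    """query: [3]; point_xyz: [N, 3]; point_feat: [N, C].
    Returns an inverse-distance-weighted blend of the k nearest features."""
    d = np.linalg.norm(point_xyz - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()
    return w @ point_feat[idx]  # [C]

rng = np.random.default_rng(0)
feat = aggregate(np.zeros(3), rng.uniform(-1, 1, (100, 3)),
                 rng.normal(size=(100, 16)))
```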
arXiv Detail & Related papers (2022-01-21T18:59:20Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes.
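NeRF++ is known for addressing this with an inverted-sphere parametrization: a distant point is encoded as a direction on the unit sphere plus an inverse distance, so unbounded content maps to a bounded coordinate range. A minimal sketch of that mapping:

```python
# Re-express points outside the unit sphere (the 'outer' scene) as a
# unit direction plus an inverse distance in (0, 1).
import numpy as np

def inverted_sphere_coords(x):
    """x: [N, 3] points with ||x|| > 1. Returns [N, 4]."""
    r = np.linalg.norm(x, axis=1, keepdims=True)
    return np.concatenate([x / r, 1.0 / r], axis=1)

pts = np.array([[2.0, 0.0, 0.0], [0.0, 0.0, 100.0]])
print(inverted_sphere_coords(pts))
# Far-away points land near 1/r -> 0, keeping the input domain compact.
```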
arXiv Detail & Related papers (2020-10-15T03:24:14Z)