Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields
- URL: http://arxiv.org/abs/2402.14196v1
- Date: Thu, 22 Feb 2024 00:45:40 GMT
- Title: Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields
- Authors: Seungtae Nam, Daniel Rho, Jong Hwan Ko, Eunbyung Park
- Abstract summary: We present mip-Grid, a novel approach that integrates anti-aliasing techniques into grid-based representations for radiance fields.
The proposed method generates multi-scale grids by applying simple convolution operations over a shared grid representation and uses the scale-aware coordinate to retrieve features at different scales from the generated multi-scale grids.
- Score: 12.910072009005065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable achievements of neural radiance fields (NeRF) in
representing 3D scenes and generating novel view images, the aliasing issue,
which produces "jaggy" or "blurry" images at varying camera distances, remains
unresolved in most existing approaches. The recently proposed mip-NeRF has
addressed this challenge by rendering conical frustums instead of rays.
However, it relies on an MLP architecture to represent the radiance fields,
missing out on the fast training speed offered by the latest grid-based
methods. In this work, we present mip-Grid, a novel approach that integrates
anti-aliasing techniques into grid-based representations for radiance fields,
mitigating the aliasing artifacts while enjoying fast training time. The
proposed method generates multi-scale grids by applying simple convolution
operations over a shared grid representation and uses the scale-aware
coordinate to retrieve features at different scales from the generated
multi-scale grids. To test its effectiveness, we integrated the proposed method
into the two recent representative grid-based methods, TensoRF and K-Planes.
Experimental results demonstrate that mip-Grid greatly improves the rendering
performance of both methods and even outperforms mip-NeRF on multi-scale
datasets while achieving significantly faster training time. For code and demo
videos, please see https://stnamjef.github.io/mipgrid.github.io/.
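To make the mechanism above concrete, here is a minimal PyTorch sketch of the two ideas: convolution-generated multi-scale grids over a shared grid, and scale-aware feature retrieval. The class name, grid shapes, kernel size, number of levels, and the level-blending rule are our assumptions, not the paper's exact architecture.
```python
import torch
import torch.nn.functional as F


class MipGridSketch(torch.nn.Module):
    """Minimal sketch of a mip-Grid-style anti-aliased 2D feature grid.

    A single shared grid is repeatedly passed through a small learnable
    convolution to produce a pyramid of progressively smoothed grids, and a
    continuous scale coordinate selects (by blending adjacent levels) which
    grid a query reads from. All shapes and hyperparameters are illustrative.
    """

    def __init__(self, channels=16, resolution=256, num_levels=4):
        super().__init__()
        # Shared learnable base grid: (1, C, H, W).
        self.base_grid = torch.nn.Parameter(
            0.1 * torch.randn(1, channels, resolution, resolution))
        # One small conv per additional level; applying them in sequence
        # yields the multi-scale grids from the shared representation.
        self.convs = torch.nn.ModuleList(
            [torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
             for _ in range(num_levels - 1)])
        self.num_levels = num_levels

    def multi_scale_grids(self):
        grids, g = [self.base_grid], self.base_grid
        for conv in self.convs:
            g = conv(g)  # simple convolution over the previous level
            grids.append(g)
        return grids

    def forward(self, xy, scale):
        """xy: (N, 2) coords in [-1, 1]; scale: (N,) continuous level in
        [0, num_levels - 1], e.g. derived from the pixel footprint."""
        grids = self.multi_scale_grids()
        pts = xy.view(1, -1, 1, 2)
        # Bilinear lookup at every level -> (num_levels, N, C).
        feats = torch.stack([
            F.grid_sample(g, pts, align_corners=True)[0, :, :, 0].T
            for g in grids])
        # Blend the two levels bracketing each sample's scale coordinate.
        lo = scale.floor().long().clamp(0, self.num_levels - 2)
        w = (scale - lo.float()).unsqueeze(-1)  # (N, 1)
        idx = torch.arange(xy.shape[0], device=xy.device)
        return (1.0 - w) * feats[lo, idx] + w * feats[lo + 1, idx]
```
Querying with a `scale` tied to each sample's projected pixel footprint is what lets near and far views read appropriately filtered features.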
Related papers
- Freq-Mip-AA : Frequency Mip Representation for Anti-Aliasing Neural Radiance Fields [3.796287987989994]
Mip-NeRF proposed using frustums to render a pixel and suggested integrated positional encoding (IPE), sketched after this entry.
While effective, this approach requires long training times due to its reliance on a volumetric architecture.
We propose a novel anti-aliasing technique that utilizes grid-based representations, which typically train significantly faster.
arXiv Detail & Related papers (2024-06-19T06:33:56Z)
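For context on the IPE mentioned above: mip-NeRF encodes a Gaussian (mean, covariance) rather than a point, attenuating each frequency band by the Gaussian's variance so that frequencies finer than the pixel footprint fade out. Below is a compact sketch for diagonal covariance; the function name and shapes are ours.
```python
import torch

def integrated_pos_enc(mean, var, num_freqs=10):
    """Integrated positional encoding in the style of mip-NeRF (diagonal
    covariance). Each frequency band 2^l is attenuated by
    exp(-0.5 * 4^l * var), so bands finer than the Gaussian's footprint
    smoothly fade out (the anti-aliasing effect).
    mean, var: (..., 3) tensors; returns (..., 3 * 2 * num_freqs).
    """
    scales = 2.0 ** torch.arange(num_freqs)              # (L,)
    scaled_mean = mean[..., None, :] * scales[:, None]   # (..., L, 3)
    scaled_var = var[..., None, :] * scales[:, None] ** 2
    atten = torch.exp(-0.5 * scaled_var)
    enc = torch.cat([torch.sin(scaled_mean) * atten,
                     torch.cos(scaled_mean) * atten], dim=-1)
    return enc.flatten(start_dim=-2)
```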
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes (see the sketch after this entry).
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
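A tiny sketch of the routing rule described above, under our own assumptions about the footprint-to-level mapping (PyNeRF's exact formulation may differ):
```python
import torch

def route_to_level(sample_radius, base_radius=1.0, num_levels=8):
    """Map a sample's footprint radius to a (fractional) grid level:
    samples covering larger volumes get coarser levels. The log2 mapping
    and parameter names are assumptions, not PyNeRF's exact rule.
    """
    level = torch.log2(sample_radius / base_radius)
    return level.clamp(0.0, num_levels - 1)  # blend the two nearest heads
```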
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields [64.13207562222094]
We show how a technique that combines mip-NeRF 360 and grid-based models can yield error rates that are 8% to 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360 (see the sketch after this entry).
arXiv Detail & Related papers (2023-04-13T17:55:12Z)
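The entry above does not spell out the mechanism, so as a hedged illustration: one ingredient Zip-NeRF is known for is multisampling the grid over a conical frustum's footprint. The sketch below uses plain Gaussian jitter rather than the paper's exact sampling pattern, and `grid_lookup` is a hypothetical point-queryable grid.
```python
import torch

def frustum_features(grid_lookup, means, stds, num_samples=6):
    """Average grid features over jittered sub-samples of a frustum so the
    lookup reflects its full footprint (anti-aliasing by multisampling).
    means: (N, 3) frustum centers; stds: (N, 1) footprint scales.
    grid_lookup: hypothetical callable mapping (M, 3) points to (M, C).
    """
    pts = means + stds * torch.randn(num_samples, *means.shape)  # (S, N, 3)
    feats = grid_lookup(pts.reshape(-1, 3))                      # (S*N, C)
    return feats.reshape(num_samples, means.shape[0], -1).mean(dim=0)
```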
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
- SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input Images [62.64942825962934]
We present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis.
We propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views.
SuRFNet employs partial SRFs from one or a few images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields.
arXiv Detail & Related papers (2022-12-18T14:56:22Z)
- Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF with only 20 minutes of training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.