Balanced Spherical Grid for Egocentric View Synthesis
- URL: http://arxiv.org/abs/2303.12408v2
- Date: Fri, 24 Mar 2023 08:36:20 GMT
- Title: Balanced Spherical Grid for Egocentric View Synthesis
- Authors: Changwoon Choi, Sang Min Kim, Young Min Kim
- Abstract summary: We present EgoNeRF, a practical solution to reconstruct large-scale real-world environments for VR assets.
Given a few seconds of casually captured 360 video, EgoNeRF can efficiently build neural radiance fields.
- Score: 6.518792457424123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present EgoNeRF, a practical solution to reconstruct large-scale
real-world environments for VR assets. Given a few seconds of casually captured
360 video, EgoNeRF can efficiently build neural radiance fields that enable
high-quality rendering from novel viewpoints. Motivated by the recent
acceleration of NeRF using feature grids, we adopt spherical coordinates
instead of conventional Cartesian coordinates. A Cartesian feature grid is
inefficient for representing large-scale unbounded scenes because its
resolution is spatially uniform regardless of the distance from the viewer. The
spherical parameterization aligns better with the rays of egocentric images,
and still permits factorization for performance. However, a naïve spherical
grid suffers from irregularities at the two poles and cannot represent
unbounded scenes. To avoid the singularities near the poles, we combine two
balanced grids, which yields a quasi-uniform angular grid. We also partition
the radial grid exponentially and place an environment map at infinity to
represent unbounded scenes. Furthermore, our resampling technique for
grid-based methods increases the number of valid samples used to train the
NeRF volume. We extensively evaluate our method on newly introduced synthetic
and real-world egocentric 360 video datasets, and it consistently achieves
state-of-the-art performance.
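The two grid constructions described in the abstract can be illustrated compactly. Below is a minimal Python sketch, not the authors' code: the patch-assignment rule, the bin count, and the radius range r_min/r_max are illustrative assumptions. It pairs a Yin-Yang-style split into two balanced angular patches, each sampling only the equatorial band of its own frame, with an exponential radial partition whose cells grow with distance from the viewer.

```python
# A minimal sketch of a balanced spherical grid and an exponential radial
# partition. Not the authors' code; the patch-assignment rule, bin count,
# and r_min/r_max are illustrative assumptions.
import numpy as np

def balanced_patch_coords(xyz):
    """Assign unit directions to one of two rotated spherical patches.

    Each patch covers only the equatorial band of its own frame, so
    neither patch ever samples near its poles and the angular cells
    stay quasi-uniform (a Yin-Yang-style construction).
    """
    theta = np.arccos(np.clip(xyz[..., 2], -1.0, 1.0))
    in_first = np.abs(theta - np.pi / 2) <= np.pi / 4    # near first equator
    # Second patch: rotate so the first frame's poles land on its equator.
    rotated = np.stack([-xyz[..., 0], xyz[..., 2], xyz[..., 1]], axis=-1)
    local = np.where(in_first[..., None], xyz, rotated)
    th = np.arccos(np.clip(local[..., 2], -1.0, 1.0))    # polar angle
    ph = np.arctan2(local[..., 1], local[..., 0])        # azimuth
    return in_first.astype(np.int32), th, ph

def radial_bin(r, r_min=0.1, r_max=100.0, n_bins=64):
    """Equal-size bins in log(r), so cell extent grows with distance;
    anything beyond r_max is left to the environment map at infinity."""
    t = np.log(np.clip(r, r_min, r_max) / r_min) / np.log(r_max / r_min)
    return np.minimum((t * n_bins).astype(np.int32), n_bins - 1)
```

Under this indexing a sample is addressed by (patch, theta, phi, radial bin), each patch is a regular 2D grid over its band, and samples past r_max fall through to the environment map.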
Related papers
- Aerial-NeRF: Adaptive Spatial Partitioning and Sampling for Large-Scale Aerial Rendering [10.340739248752516]
We propose Aerial-NeRF to render complex aerial scenes with high precision.
Our model performs rendering over 4 times faster than multiple competitors.
New state-of-the-art results have been achieved on two public large-scale aerial datasets.
arXiv Detail & Related papers (2024-05-10T02:57:02Z)
- Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields [12.910072009005065]
We present mip-grid, a novel approach that integrates anti-aliasing techniques into grid-based representations for radiance fields.
The proposed method generates multi-scale grids by applying simple convolution operations over a shared grid representation and uses the scale coordinate to retrieve features at different scales from the generated multi-scale grids.
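A minimal sketch of that idea follows, under simplifying assumptions: a 2D grid, a fixed box filter standing in for the learned convolutions, and nearest-cell lookup in the spatial axes.

```python
# A minimal sketch of multi-scale grids derived from one shared grid,
# with a continuous scale coordinate blending between them. The box
# filter and nearest-cell lookup are simplifying assumptions.
import numpy as np

def build_multiscale(grid, n_scales=4):
    """Each scale smooths the previous one with a 3x3 box filter,
    approximating pre-filtered (anti-aliased) copies of the grid."""
    scales = [grid]
    for _ in range(n_scales - 1):
        g = scales[-1]
        p = np.pad(g, ((1, 1), (1, 1), (0, 0)), mode="edge")
        g = sum(p[i:i + g.shape[0], j:j + g.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
        scales.append(g)
    return np.stack(scales)                      # (n_scales, H, W, C)

def query(scales, u, v, s):
    """Nearest-cell lookup at (u, v); linear blend along the scale axis s."""
    n, h, w, _ = scales.shape
    i, j = int(u * (h - 1)), int(v * (w - 1))
    s = float(np.clip(s, 0.0, n - 1.0))
    lo, t = int(np.floor(s)), s - np.floor(s)
    hi = min(lo + 1, n - 1)
    return (1 - t) * scales[lo, i, j] + t * scales[hi, i, j]
```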
arXiv Detail & Related papers (2024-02-22T00:45:40Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We reduce error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
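A minimal sketch of that routing rule, with an assumed footprint model (pixel width scaled by sample distance) and illustrative head resolutions:

```python
# A minimal sketch of resolution routing: pick the finest head whose grid
# cell still covers the sample's footprint. The footprint model and the
# head resolutions are illustrative assumptions, not the paper's rule.
HEAD_RESOLUTIONS = [64, 128, 256, 512]   # cells per scene unit, coarse -> fine

def pick_head(sample_distance, pixel_width):
    footprint = pixel_width * sample_distance            # approx. sample extent
    for idx in range(len(HEAD_RESOLUTIONS) - 1, -1, -1): # finest first
        if 1.0 / HEAD_RESOLUTIONS[idx] >= footprint:     # cell covers footprint
            return idx
    return 0    # footprint exceeds every cell size: use the coarsest head
```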
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods degrade in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
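A minimal sketch of the compositing step, with assumed sizes and stand-in linear maps in place of learned decoders: K parallel sub-space features are decoded to colors and blended with predicted weights, so reflections can live in their own sub-space rather than corrupting a single radiance field.

```python
# A minimal sketch of blending K parallel sub-space outputs for one sample.
# The sizes and the linear decoder/gate are stand-in assumptions.
import numpy as np

K, C = 4, 8                               # sub-spaces, feature channels
rng = np.random.default_rng(0)
W_color = rng.normal(size=(C, 3)) * 0.1   # stand-in per-sub-space color decoder
W_gate = rng.normal(size=(C,)) * 0.1      # stand-in weighting head

def blend(features):
    """features: (K, C) sub-space features for one sample -> (3,) color."""
    colors = features @ W_color            # (K, 3) per-sub-space RGB
    logits = features @ W_gate             # (K,) blending logits
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()               # softmax over sub-spaces
    return weights @ colors                # composited color
```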
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields [64.13207562222094]
We show how a technique that combines mip-NeRF 360 and grid-based models can yield error rates that are 8%-77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360.
arXiv Detail & Related papers (2023-04-13T17:55:12Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate and provides a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
- DeRF: Decomposed Radiance Fields [30.784481193893345]
In this paper, we propose a technique based on spatial decomposition that dedicates smaller networks to each decomposed part of the scene.
We show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm.
Our experiments show that for real-world scenes, our method provides up to 3x more efficient inference than NeRF.
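A minimal sketch of the Voronoi routing with hypothetical sites: each sample is handled by the network owning its nearest site, and because Voronoi cells are convex, a ray visits them front to back, which is what makes depth-ordered (Painter's Algorithm) compositing valid.

```python
# A minimal sketch of Voronoi cell assignment; the sites are hypothetical.
import numpy as np

SITES = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])       # illustrative decomposition sites

def cell_of(points):
    """Nearest-site index for each query point, (N, 3) -> (N,)."""
    d2 = ((points[:, None, :] - SITES[None, :, :]) ** 2).sum(-1)
    return d2.argmin(-1)

# Samples along one ray, front to back; each contiguous run of equal
# indices is one segment, rendered by that cell's dedicated network and
# alpha-composited in this same order.
ray = np.array([[0.1 * t, 0.02 * t, 0.0] for t in range(12)])
print(cell_of(ray))                       # e.g. [0 0 0 0 0 0 1 1 1 1 1 1]
```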
arXiv Detail & Related papers (2020-11-25T02:47:16Z)