Adaptive Shells for Efficient Neural Radiance Field Rendering
- URL: http://arxiv.org/abs/2311.10091v1
- Date: Thu, 16 Nov 2023 18:58:55 GMT
- Title: Adaptive Shells for Efficient Neural Radiance Field Rendering
- Authors: Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun
Gao, Alexander Keller, Sanja Fidler, Thomas Müller, Zan Gojcic
- Abstract summary: We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
- Score: 92.18962730460842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields achieve unprecedented quality for novel view
synthesis, but their volumetric formulation remains expensive, requiring a huge
number of samples to render high-resolution images. Volumetric encodings are
essential to represent fuzzy geometry such as foliage and hair, and they are
well-suited for stochastic optimization. Yet, many scenes ultimately consist
largely of solid surfaces which can be accurately rendered by a single sample
per pixel. Based on this insight, we propose a neural radiance formulation that
smoothly transitions between volumetric- and surface-based rendering, greatly
accelerating rendering speed and even improving visual fidelity. Our method
constructs an explicit mesh envelope which spatially bounds a neural volumetric
representation. In solid regions, the envelope nearly converges to a surface
and can often be rendered with a single sample. To this end, we generalize the
NeuS formulation with a learned spatially-varying kernel size which encodes the
spread of the density, fitting a wide kernel to volume-like regions and a tight
kernel to surface-like regions. We then extract an explicit mesh of a narrow
band around the surface, with width determined by the kernel size, and
fine-tune the radiance field within this band. At inference time, we cast rays
against the mesh and evaluate the radiance field only within the enclosed
region, greatly reducing the number of samples required. Experiments show that
our approach enables efficient rendering at very high fidelity. We also
demonstrate that the extracted envelope enables downstream applications such as
animation and simulation.
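To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of its two key ingredients: a NeuS-style opacity whose logistic kernel size varies spatially, and volume rendering restricted to the ray interval enclosed by the shell mesh. The function names (sdf_fn, kernel_fn, color_fn, render_ray_in_shell) and the compositing details are illustrative assumptions, not the paper's released code.
```python
import torch

def segment_alpha(sdf_near, sdf_far, kernel_size):
    """NeuS-style opacity of one ray segment from the SDF values at its
    endpoints. kernel_size is the learned, spatially varying spread of the
    density: a wide kernel suits volume-like regions (foliage, hair), while
    a tight kernel collapses density onto the surface in solid regions."""
    s = 1.0 / (kernel_size + 1e-6)            # slope of the logistic CDF
    cdf_near = torch.sigmoid(s * sdf_near)    # Phi_s(f(p_i))
    cdf_far = torch.sigmoid(s * sdf_far)      # Phi_s(f(p_{i+1}))
    return ((cdf_near - cdf_far) / (cdf_near + 1e-6)).clamp(0.0, 1.0)

def render_ray_in_shell(sdf_fn, color_fn, kernel_fn, origin, direction,
                        t_enter, t_exit, n_samples=8):
    """Volume-render one ray, evaluated only inside [t_enter, t_exit]: the
    interval where the ray passes through the extracted shell mesh (found
    by casting the ray against the mesh). Where the shell is tight around
    a solid surface, n_samples can drop toward 1."""
    t = t_enter + torch.linspace(0.0, 1.0, n_samples + 1) * (t_exit - t_enter)
    pts = origin + t[:, None] * direction           # (n_samples+1, 3)
    sdf = sdf_fn(pts)                               # (n_samples+1,)
    k = kernel_fn(pts[:-1])                         # per-segment kernel size
    alpha = segment_alpha(sdf[:-1], sdf[1:], k)     # (n_samples,)
    # Standard front-to-back alpha compositing.
    trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1.0 - alpha]), 0)[:-1]
    weights = trans * alpha
    rgb = color_fn(pts[:-1], direction.expand_as(pts[:-1]))  # (n_samples, 3)
    return (weights[:, None] * rgb).sum(dim=0)
```
The narrow-band envelope itself would be extracted, as the abstract describes, by taking level sets of the SDF at offsets governed by the local kernel size (e.g. with marching cubes) and fine-tuning the radiance field inside that band.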
Related papers
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based
View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z) - HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2Kx2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z) - PDF: Point Diffusion Implicit Function for Large-scale Scene Neural
Representation [24.751481680565803]
We propose the Point Diffusion Implicit Function (PDF) for large-scale scene neural representation.
The core of our method is a large-scale point cloud super-resolution diffusion module.
Region sampling based on Mip-NeRF 360 is employed to model the background representation.
arXiv Detail & Related papers (2023-11-03T08:19:47Z) - Online Neural Path Guiding with Normalized Anisotropic Spherical
Gaussians [20.68953631807367]
We propose a novel online framework that learns a spatially-varying density model with a single small neural network.
Our framework learns the distribution in a progressive manner and does not need any warm-up phases.
arXiv Detail & Related papers (2023-03-11T05:22:42Z) - AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance
Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow, however, because an accurate quadrature of the volume rendering equation requires a large number of samples per ray.
We propose a novel dual-network architecture that takes an orthogonal direction, learning how best to reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z) - Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]
We propose a lightweight module called the neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
We show that NeuSample achieves better rendering quality than NeRF while enjoying faster inference; a minimal sketch of this ray-to-samples mapping appears after this list.
arXiv Detail & Related papers (2021-11-30T16:43:49Z) - Learning Neural Transmittance for Efficient Rendering of Reflectance
Fields [43.24427791156121]
We propose a novel method based on precomputed Neural Transmittance Functions to accelerate rendering of neural reflectance fields.
Results on real and synthetic scenes demonstrate a speedup of almost two orders of magnitude for renderings under environment maps, with minimal accuracy loss.
arXiv Detail & Related papers (2021-10-25T21:12:25Z)
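As referenced in the NeuSample entry above, the idea of a sample field that maps a ray directly to sample locations can be sketched with a small MLP. The architecture, layer sizes, and names below are illustrative assumptions, not NeuSample's actual design.
```python
import torch
import torch.nn as nn

class SampleField(nn.Module):
    """Tiny MLP mapping a ray (origin, direction) to n sorted depth values,
    standing in for the coarse pass of hierarchical NeRF sampling."""
    def __init__(self, n_samples=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_samples),
        )

    def forward(self, origin, direction, near, far):
        # Predict a sample distribution as sorted depths in [near, far].
        raw = self.net(torch.cat([origin, direction], dim=-1))
        depths = near + torch.sigmoid(raw).sort(dim=-1).values * (far - near)
        # Transform depths into 3D point coordinates for the radiance field.
        points = origin[..., None, :] + depths[..., :, None] * direction[..., None, :]
        return depths, points
```
At inference, the radiance field would then be evaluated only at these predicted points, skipping a separate coarse sampling pass.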