TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering
- URL: http://arxiv.org/abs/2111.03643v1
- Date: Fri, 5 Nov 2021 17:50:44 GMT
- Title: TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering
- Authors: Martin Piala, Ronald Clark
- Abstract summary: Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes.
This type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times.
We present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance.
- Score: 18.254077751772005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes. However, this type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times. In this paper, we present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance. Using this approach we are able to render, train and fine-tune a volumetrically-rendered neural field model an order of magnitude faster than standard approaches. Unlike existing methods, our approach works with general volumes and can be trained end-to-end.
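The abstract describes this ray-to-sample mapping only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea: a small network turns each ray's origin and direction into a distribution over depth bins, and the volume network is then queried only at the most probable depths before the usual alpha compositing. The class and function names, the bin discretisation, and the `nerf_model` interface are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of the ray-termination idea described in the abstract,
# NOT the authors' implementation. Network sizes, bin counts, and the
# `nerf_model` interface are illustrative assumptions.
import torch
import torch.nn as nn


class RaySamplingNetwork(nn.Module):
    """Maps a camera ray (origin, direction) to a distribution over depth
    bins; high-mass bins indicate where the volume is most likely to
    influence the pixel, so the volume network is only queried there."""

    def __init__(self, num_bins: int = 64, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bins),
        )

    def forward(self, rays_o: torch.Tensor, rays_d: torch.Tensor) -> torch.Tensor:
        # rays_o, rays_d: (N, 3) ray origins and unit directions.
        logits = self.mlp(torch.cat([rays_o, rays_d], dim=-1))
        return torch.softmax(logits, dim=-1)  # (N, num_bins)


def render_with_predicted_samples(nerf_model, sampler, rays_o, rays_d,
                                  near=2.0, far=6.0, samples_per_ray=16):
    """Query the volume only at depths favoured by the sampling network,
    instead of densely along the whole ray."""
    probs = sampler(rays_o, rays_d)                                # (N, B)
    bin_edges = torch.linspace(near, far, probs.shape[-1] + 1,
                               device=rays_o.device)
    # Pick the most probable bins and take their centre depths.
    top_bins = probs.topk(samples_per_ray, dim=-1).indices         # (N, S)
    depths = 0.5 * (bin_edges[top_bins] + bin_edges[top_bins + 1])
    depths, _ = depths.sort(dim=-1)
    points = rays_o[:, None, :] + depths[..., None] * rays_d[:, None, :]
    # `nerf_model(points, dirs)` is assumed (hypothetically) to return
    # per-point density and colour with shapes (N, S) and (N, S, 3).
    sigma, rgb = nerf_model(points, rays_d[:, None, :].expand_as(points))
    # Standard alpha compositing over the sparse set of predicted samples.
    deltas = torch.diff(depths, dim=-1, append=depths[:, -1:] + 1e-2)
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                   # (N, 3)
```

Even this crude top-k selection illustrates the intended saving: a handful of predicted samples per ray replaces the dense uniform and hierarchical sampling that a standard volumetrically-rendered neural field performs.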
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z) - Local Implicit Ray Function for Generalizable Radiance Field Representation [20.67358742158244]
We propose LIRF (Local Implicit Ray Function), a generalizable neural rendering approach for novel view rendering.
Given 3D positions within conical frustums, LIRF takes 3D coordinates and the features of conical frustums as inputs and predicts a local volumetric radiance field.
Since the coordinates are continuous, LIRF renders high-quality novel views at a continuously-valued scale via volume rendering.
arXiv Detail & Related papers (2023-04-25T11:52:33Z) - Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z) - Learning Generalizable Light Field Networks from Few Images [7.672380267651058]
We present a new strategy for few-shot novel view synthesis based on a neural light field representation.
We show that our method achieves competitive performance on synthetic and real MVS data compared to state-of-the-art neural radiance field based methods.
arXiv Detail & Related papers (2022-07-24T14:47:11Z) - Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z) - pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z) - TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer [6.320273914694594]
We propose and evaluate a neural point-based graphics method that can model semi-transparent scene parts.
We show that novel views of semi-transparent point cloud scenes can be generated after training with our approach.
arXiv Detail & Related papers (2020-09-06T21:19:18Z)
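Several of the entries above (TermiNeRF, LIRF, the ViT-based single-image method, MVSNeRF, InfoNeRF) ultimately composite per-point network queries with the same discrete volume-rendering quadrature; it is reproduced here for reference because the number of samples N per ray in this sum is exactly the cost that sample-prediction methods such as TermiNeRF reduce. This is the standard NeRF-style formulation, not any single paper's variant.

```latex
% Standard discrete volume-rendering quadrature (NeRF-style).
% sigma_i and c_i are the density and colour returned by the network at the
% i-th sample along the ray, and delta_i = t_{i+1} - t_i is the inter-sample distance.
\[
  \hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\, \mathbf{c}_i,
  \qquad
  T_i \;=\; \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr).
\]
```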