Volumetric Rendering with Baked Quadrature Fields
- URL: http://arxiv.org/abs/2312.02202v2
- Date: Wed, 10 Jul 2024 06:27:00 GMT
- Title: Volumetric Rendering with Baked Quadrature Fields
- Authors: Gopal Sharma, Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi
- Abstract summary: We propose a novel representation for non-opaque scenes that enables fast inference by utilizing textured polygons.
Our method integrates easily with existing graphics frameworks, achieving rendering speeds of over 100 frames per second for a $1920\times1080$ image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel Neural Radiance Field (NeRF) representation for non-opaque scenes that enables fast inference by utilizing textured polygons. Despite the high-quality novel view rendering that NeRF provides, a critical limitation is that it relies on volume rendering, which can be computationally expensive and does not utilize the advancements in modern graphics hardware. Many existing methods fall short when it comes to modelling volumetric effects, as they rely purely on surface rendering. We thus propose to model the scene with polygons, which can then be used to obtain the quadrature points required to model volumetric effects, as well as their opacity and colour from the texture. To obtain such a polygonal mesh, we train a specialized field whose zero-crossings correspond to the quadrature points of volume rendering, and perform marching cubes on this field. We then perform ray tracing and use the ray-tracing shader to obtain the final colour image. Our method integrates easily with existing graphics frameworks, achieving rendering speeds of over 100 frames per second for a $1920\times1080$ image, while still being able to represent non-opaque objects.
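As a concrete illustration of the compositing step the abstract describes, here is a minimal NumPy sketch: once the quadrature points are recovered as ray-polygon intersections and their opacity and colour are fetched from the texture, the pixel colour follows the standard front-to-back alpha-compositing sum. All names below are illustrative, not the paper's API.

```python
import numpy as np

def composite_quadrature_points(opacities, colours):
    """Alpha-composite per-ray quadrature points, sorted front to back.

    opacities: (N,) opacity alpha_i at each ray-polygon intersection
    colours:   (N, 3) colour c_i sampled from the texture at each hit
    returns:   (3,) final pixel colour
    """
    # Transmittance before hit i: product of (1 - alpha_j) for j < i.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - opacities)[:-1]])
    weights = trans * opacities            # w_i = T_i * alpha_i
    return (weights[:, None] * colours).sum(axis=0)

# Example: a red semi-transparent hit in front of a blue one.
pixel = composite_quadrature_points(
    np.array([0.4, 0.7]),
    np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))
```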
Related papers
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method handles blending more accurately than 3DGS and follow-up work on view synthesis.
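Exact volume rendering over constant-density primitives has a closed form per ray segment, which is what distinguishes it from alpha-compositing splats. A hedged NumPy sketch of that integral (segment boundaries and densities are assumed inputs, not EVER's interface):

```python
import numpy as np

def render_piecewise_constant(sigmas, colours, deltas):
    """Exact emission-absorption integral for piecewise-constant density:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with T_i = exp(-sum_{j<i} sigma_j * delta_j).

    sigmas:  (N,) density inside each ray segment
    colours: (N, 3) emitted colour of each segment
    deltas:  (N,) segment lengths along the ray
    """
    tau = sigmas * deltas                  # optical depth per segment
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(tau)[:-1]]))
    weights = trans * (1.0 - np.exp(-tau))
    return (weights[:, None] * colours).sum(axis=0)
```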
arXiv Detail & Related papers (2024-10-02T17:59:09Z)
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
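A CPU-side sketch of the round-based traversal this suggests: fetch the next few hits from the acceleration structure, composite them in depth order, and continue until the ray is nearly opaque. The `next_k_hits` callable stands in for the hardware BVH query and is an assumption, not the paper's API.

```python
def trace_particles(ray, next_k_hits, k=8, min_transmittance=1e-3):
    """Composite semi-transparent particle hits in depth order.

    next_k_hits(ray, t_min, k) is assumed to return up to k hits past
    t_min, each as (t, alpha, rgb), sorted by increasing t.
    """
    colour, trans, t_min = [0.0, 0.0, 0.0], 1.0, 0.0
    while trans > min_transmittance:
        hits = next_k_hits(ray, t_min, k)
        if not hits:
            break                          # ray left the scene
        for t, alpha, rgb in hits:
            colour = [c + trans * alpha * ch for c, ch in zip(colour, rgb)]
            trans *= 1.0 - alpha
        t_min = hits[-1][0]                # resume past the last hit
    return colour, trans
```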
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
- TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering [6.142272540492937]
We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
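A hedged sketch of the trilinear write the name refers to: each point is splatted bilinearly into the two pyramid layers that bracket its projected size, with a linear weight across layers. Depth testing and the reconstruction network are omitted, and all names are illustrative.

```python
import numpy as np

def splat_trilinear(pyramid, x, y, level, feature):
    """Write `feature` into an image pyramid with trilinear weights.

    pyramid: list of (H_l, W_l, C) arrays; level l halves resolution
    x, y:    continuous pixel coordinates at level 0
    level:   continuous target layer from the point's projected size
    """
    l0 = int(np.floor(level))
    for l, wl in ((l0, 1.0 - (level - l0)), (l0 + 1, level - l0)):
        if wl <= 0.0 or l >= len(pyramid):
            continue
        px, py = x / 2**l, y / 2**l        # coordinates in this layer
        ix, iy = int(np.floor(px)), int(np.floor(py))
        fx, fy = px - ix, py - iy
        for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                          (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
            H, W, _ = pyramid[l].shape
            if 0 <= iy + dy < H and 0 <= ix + dx < W:
                pyramid[l][iy + dy, ix + dx] += wl * w * feature
```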
arXiv Detail & Related papers (2024-01-11T16:06:36Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K$\times$2K).
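The speedup comes from dispatching per ray: surface-like regions terminate with a single sample, while the few genuinely volumetric regions fall back to multi-sample quadrature. A minimal sketch, with the learned "surfaceness" stubbed as an assumed callable:

```python
def render_ray(ray, surfaceness, surface_sample, volume_march, threshold=0.5):
    """Dispatch a ray between surface and volume rendering.

    surfaceness(ray)    -> float in [0, 1], learned per region (assumed)
    surface_sample(ray) -> colour from one sample at the level set
    volume_march(ray)   -> colour from multi-sample volume rendering
    """
    # Most rays pay for a single surface sample; only the rest march.
    if surfaceness(ray) > threshold:
        return surface_sample(ray)
    return volume_march(ray)
```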
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
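A sketch of how an extracted envelope cuts sampling cost: each ray is clipped to its entry/exit interval against the shell, and quadrature points are placed only inside it (the intersection routine is assumed):

```python
import numpy as np

def samples_in_shell(ray_o, ray_d, shell_intersect, n_samples):
    """Place quadrature points only inside the extracted shell.

    shell_intersect(o, d) is assumed to return (t_enter, t_exit) for
    the ray against the shell mesh, or None on a miss.
    """
    hit = shell_intersect(ray_o, ray_d)
    if hit is None:
        return np.empty((0, 3))            # ray never enters the shell
    t_enter, t_exit = hit
    t = np.linspace(t_enter, t_exit, n_samples)
    return ray_o + t[:, None] * ray_d      # (n_samples, 3) sample points
```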
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation based on conical frustums to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
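A hedged sketch of a multiscale grid query in this spirit: take trilinear taps in the two mip levels bracketing the sample's footprint and blend them linearly. The grid layout and level-of-detail computation here are assumptions, not Mip-VoG's exact scheme.

```python
import numpy as np

def trilerp(grid, p):
    """Trilinearly interpolate a (D, H, W, C) grid at p in [0, 1]^3
    (z, y, x order)."""
    hi = np.array(grid.shape[:3]) - 1
    q = np.clip(np.asarray(p) * hi, 0, hi - 1e-6)
    i, f = q.astype(int), q - q.astype(int)
    out = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((f[0] if dz else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dx else 1 - f[2]))
                out = out + w * grid[i[0] + dz, i[1] + dy, i[2] + dx]
    return out

def mip_lookup(mip_levels, p, lod):
    """Blend trilinear taps from the two levels bracketing `lod`."""
    l0 = int(np.floor(lod))
    l1 = min(l0 + 1, len(mip_levels) - 1)
    t = lod - l0
    return (1 - t) * trilerp(mip_levels[l0], p) + t * trilerp(mip_levels[l1], p)
```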
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures [22.557877400262402]
Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize images of 3D scenes from novel views.
They rely upon specialized rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware.
This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard pipelines.
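A sketch of the deferred step that makes standard pipelines sufficient: the rasterizer writes a small feature vector per pixel, and a tiny MLP maps feature plus view direction to colour. Layer sizes and weights below are placeholders, not MobileNeRF's trained network.

```python
import numpy as np

def deferred_shade(features, view_dirs, w1, b1, w2, b2):
    """Per-pixel view-dependent shading after rasterization.

    features:  (H, W, F) features written by the fragment shader
    view_dirs: (H, W, 3) unit view direction per pixel
    w1, b1, w2, b2: weights of a tiny two-layer MLP (placeholder sizes)
    """
    x = np.concatenate([features, view_dirs], axis=-1)   # (H, W, F + 3)
    h = np.maximum(x @ w1 + b1, 0.0)                     # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))          # sigmoid -> RGB
```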
arXiv Detail & Related papers (2022-07-30T17:14:14Z)
- TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering [18.254077751772005]
Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes.
This type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times.
We present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance.
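A sketch of such a ray-to-samples mapping: a sampler network scores uniform depth bins along the ray, and fine samples are drawn only where the predicted distribution has mass. The network is stubbed as an assumed callable.

```python
import numpy as np

def place_samples(ray, sampler_net, t_near, t_far, n_bins, n_samples, rng):
    """Importance-sample ray depths from a predicted bin distribution.

    sampler_net(ray) is assumed to return (n_bins,) unnormalized scores
    over uniform depth bins in [t_near, t_far].
    """
    probs = np.exp(sampler_net(ray))
    probs /= probs.sum()                       # softmax over bins
    edges = np.linspace(t_near, t_far, n_bins + 1)
    bins = rng.choice(n_bins, size=n_samples, p=probs)
    jitter = rng.random(n_samples)             # stratify inside each bin
    return np.sort(edges[bins] + jitter * (edges[1] - edges[0]))
```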
arXiv Detail & Related papers (2021-11-05T17:50:44Z)
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522]
We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real-time.
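What makes the baked representation fast is that density and diffuse colour become direct grid lookups, leaving only a tiny view-dependence MLP per pixel. A sketch of the lookup, with the data layout simplified from SNeRG's block-sparse format:

```python
import numpy as np

def query_baked_grid(grid, occupancy, ijk):
    """Fetch baked density, diffuse colour, and features at one voxel.

    grid:      (D, H, W, 4 + F) baked [sigma, rgb_diffuse, features]
    occupancy: (D, H, W) bool mask of non-empty voxels (skip the rest)
    ijk:       integer voxel index of the sample
    """
    i, j, k = ijk
    if not occupancy[i, j, k]:                 # empty space: free to skip
        return 0.0, np.zeros(3), np.zeros(grid.shape[-1] - 4)
    v = grid[i, j, k]
    return v[0], v[1:4], v[4:]                 # sigma, diffuse rgb, features
```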
arXiv Detail & Related papers (2021-03-26T17:59:52Z)
- Neural Sparse Voxel Fields [151.20366604586403]
We introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering.
NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell.
Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher-quality results.
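The speedup mostly comes from empty-space skipping: samples are generated only inside occupied voxels the ray actually intersects. A sketch with the octree flattened to a list of leaf boxes and the slab test written out:

```python
import numpy as np

def ray_aabb(o, d, lo, hi):
    """Slab test: (t_near, t_far) for a ray against one voxel's box,
    or None on a miss. Assumes d has no zero components."""
    t0, t1 = (lo - o) / d, (hi - o) / d
    t_near = np.minimum(t0, t1).max()
    t_far = np.maximum(t0, t1).min()
    return (t_near, t_far) if t_near < t_far and t_far > 0 else None

def samples_in_occupied_voxels(o, d, leaf_boxes, step):
    """Sample only inside occupied leaves: `leaf_boxes` is a list of
    (lo, hi) corner pairs from the sparse voxel octree."""
    ts = []
    for lo, hi in leaf_boxes:
        hit = ray_aabb(o, d, lo, hi)
        if hit is not None:
            ts.append(np.arange(max(hit[0], 0.0), hit[1], step))
    return np.sort(np.concatenate(ts)) if ts else np.empty(0)
```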
arXiv Detail & Related papers (2020-07-22T17:51:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.