VR-Pipe: Streamlining Hardware Graphics Pipeline for Volume Rendering
- URL: http://arxiv.org/abs/2502.17078v1
- Date: Mon, 24 Feb 2025 11:46:36 GMT
- Title: VR-Pipe: Streamlining Hardware Graphics Pipeline for Volume Rendering
- Authors: Junseo Lee, Jaisung Kim, Junyong Park, Jaewoong Sim
- Abstract summary: We implement the state-of-the-art radiance field method, 3D Gaussian splatting, using graphics APIs and evaluate it across synthetic and real-world scenes on today's graphics hardware. We present VR-Pipe, which seamlessly integrates two innovations into graphics hardware to streamline the hardware pipeline for volume rendering. Our evaluation shows that VR-Pipe greatly improves rendering performance, achieving up to a 2.78x speedup over the conventional graphics pipeline with negligible hardware overhead.
- Score: 1.9470707535768061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphics rendering that builds on machine learning and radiance fields is gaining significant attention due to its outstanding quality and speed in generating photorealistic images from novel viewpoints. However, prior work has primarily focused on evaluating its performance through software-based rendering on programmable shader cores, leaving its performance when exploiting fixed-function graphics units largely unexplored. In this paper, we investigate the performance implications of performing radiance field rendering on the hardware graphics pipeline. In doing so, we implement the state-of-the-art radiance field method, 3D Gaussian splatting, using graphics APIs and evaluate it across synthetic and real-world scenes on today's graphics hardware. Based on our analysis, we present VR-Pipe, which seamlessly integrates two innovations into graphics hardware to streamline the hardware pipeline for volume rendering, such as radiance field methods. First, we introduce native hardware support for early termination by repurposing existing special-purpose hardware in modern GPUs. Second, we propose multi-granular tile binning with quad merging, which opportunistically blends fragments in shader cores before passing them to fixed-function blending units. Our evaluation shows that VR-Pipe greatly improves rendering performance, achieving up to a 2.78x speedup over the conventional graphics pipeline with negligible hardware overhead.
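VR-Pipe's first innovation targets the compositing loop at the heart of volume rendering: once a pixel's accumulated transmittance is nearly exhausted, no remaining fragment can visibly change the result, so the rest of the work can be skipped. A minimal Python sketch of that loop (the function name and the 1e-4 threshold are illustrative, not the paper's hardware design):

```python
import numpy as np

def composite_front_to_back(colors, alphas, t_min=1e-4):
    """Front-to-back alpha compositing for one pixel.

    colors: (N, 3) per-fragment RGB, sorted near-to-far.
    alphas: (N,) per-fragment opacity in [0, 1].
    t_min:  transmittance threshold for early termination.
    """
    out = np.zeros(3)
    transmittance = 1.0
    for rgb, a in zip(colors, alphas):
        out += transmittance * a * rgb
        transmittance *= 1.0 - a
        if transmittance < t_min:  # remaining fragments are invisible,
            break                  # so terminate early and skip them
    return out

# Example: three overlapping fragments, nearest first.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
alphas = np.array([0.6, 0.9, 0.5])
print(composite_front_to_back(colors, alphas))
```

Conventional fixed-function blending cannot break out of this loop per pixel, which is why the paper repurposes existing special-purpose hardware in modern GPUs to do so.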
Related papers
- Hardware-Rasterized Ray-Based Gaussian Splatting [21.088504768655618]
We present a novel, hardware-rasterized rendering approach for ray-based 3D Gaussian Splatting (RayGS).
Our solution is the first to enable rendering RayGS models at frame rates high enough to support quality-sensitive applications like Virtual and Mixed Reality.
Our second contribution enables alias-free rendering for RayGS, by addressing MIP-related issues during training and testing.
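To make "ray-based" concrete: instead of compositing a flat 2D splat, each Gaussian's contribution can be evaluated at its point of peak density along the actual viewing ray, which has a closed form. A sketch under that assumption (not necessarily the paper's exact formulation):

```python
import numpy as np

def max_gaussian_response(o, d, mu, cov):
    """Peak response of a 3D Gaussian along the ray x(t) = o + t * d.

    Minimizes (x(t) - mu)^T cov^{-1} (x(t) - mu) over t in closed form,
    then evaluates the (unnormalized) Gaussian there.
    """
    A = np.linalg.inv(cov)                 # precision matrix
    y = o - mu
    t_star = -(y @ A @ d) / (d @ A @ d)    # closed-form minimizer
    x = y + t_star * d
    return np.exp(-0.5 * (x @ A @ x)), t_star

o = np.array([0.0, 0.0, -5.0])             # ray origin
d = np.array([0.0, 0.0, 1.0])              # unit ray direction
mu = np.array([0.1, -0.2, 0.0])            # Gaussian mean
cov = np.diag([0.5, 0.5, 1.0])             # Gaussian covariance
resp, t = max_gaussian_response(o, d, mu, cov)
print(f"peak response {resp:.3f} at t = {t:.3f}")
```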
arXiv Detail & Related papers (2025-03-24T13:53:30Z)
- VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points [4.962171160815189]
High-performance demands of virtual reality systems present challenges in utilizing fast-to-render scene representations like 3DGS.
We propose foveated rendering as a promising solution to these obstacles.
Our approach introduces a novel foveated rendering method for Virtual Reality that leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth 3DGS rendering for the peripheral vision.
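A rough sketch of the final blending step a foveated renderer of this kind needs, assuming a simple radial smoothstep mask around the gaze point (the mask shape and radii are illustrative, not the paper's):

```python
import numpy as np

def foveated_blend(foveal_img, peripheral_img, gaze_xy, r_inner, r_outer):
    """Blend a sharp foveal rendering over a cheaper peripheral one.

    The mask is 1 within r_inner pixels of the gaze point, 0 beyond
    r_outer, with a smoothstep falloff in between.
    """
    h, w, _ = foveal_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    t = np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0)
    mask = 1.0 - t * t * (3.0 - 2.0 * t)   # smoothstep falloff
    mask = mask[..., None]                 # broadcast over RGB channels
    return mask * foveal_img + (1.0 - mask) * peripheral_img

# Example: blend a stand-in "neural point" image with a stand-in "3DGS" one.
fov = np.ones((240, 320, 3)) * [1.0, 0.0, 0.0]
per = np.ones((240, 320, 3)) * [0.0, 0.0, 1.0]
out = foveated_blend(fov, per, gaze_xy=(160, 120), r_inner=40, r_outer=90)
print(out.shape)
```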
arXiv Detail & Related papers (2024-10-23T14:54:48Z)
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
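The per-ray shading this implies is sketched below: hardware traversal can report particle hits out of depth order, so hits are sorted and composited front to back (a simplified stand-in for the paper's specialized multi-hit algorithm):

```python
import numpy as np

def shade_ray(hits, t_min=1e-3):
    """Composite semi-transparent particle hits along one ray.

    hits: iterable of (depth, rgb, alpha) tuples from BVH traversal,
    in arbitrary order; they are depth-sorted before compositing.
    """
    out, transmittance = np.zeros(3), 1.0
    for depth, rgb, alpha in sorted(hits, key=lambda h: h[0]):
        out += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha
        if transmittance < t_min:   # ray saturated: stop traversing
            break
    return out

# Two hits returned out of order; the red one at depth 1.0 dominates.
hits = [(2.5, (0, 0, 1), 0.5), (1.0, (1, 0, 0), 0.6)]
print(shade_ray(hits))
```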
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
- FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces [21.946327323788275]
3D rendering of dynamic faces is a challenging problem.
We present a novel representation that enables high-quality rendering of an actor's dynamic facial performances.
arXiv Detail & Related papers (2024-04-22T00:44:13Z)
- Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use neural radiance fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to render the distilled feature fields efficiently.
arXiv Detail & Related papers (2023-12-06T00:46:30Z)
- VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams [56.00479598817949]
VideoRF is the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
We show that the feature image stream can be efficiently compressed by 2D video codecs.
We have developed a real-time interactive player that enables online streaming and rendering of dynamic scenes.
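One ingredient such codec-friendly streaming needs is mapping floating-point feature maps to 8-bit images that a standard 2D codec accepts; a sketch of that quantization step (helper names are hypothetical, and the codec itself is out of scope here):

```python
import numpy as np

def quantize_features(feat, lo=None, hi=None):
    """Map a float feature image to uint8 for a 2D video codec.

    feat: (H, W, C) float feature map for one frame.
    Returns the uint8 image plus the (lo, hi) range needed to dequantize.
    """
    lo = float(feat.min()) if lo is None else lo
    hi = float(feat.max()) if hi is None else hi
    q = np.clip((feat - lo) / (hi - lo), 0.0, 1.0)
    return (q * 255.0 + 0.5).astype(np.uint8), (lo, hi)

def dequantize_features(img, lo, hi):
    """Invert quantize_features up to 8-bit rounding error."""
    return img.astype(np.float32) / 255.0 * (hi - lo) + lo

frame = np.random.randn(64, 64, 3).astype(np.float32)
img, (lo, hi) = quantize_features(frame)
rec = dequantize_features(img, lo, hi)
print(np.abs(rec - frame).max())   # bounded by half a quantization step
```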
arXiv Detail & Related papers (2023-12-03T14:14:35Z)
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction [53.28220984270622]
3D reconstruction methods should generate high-fidelity results with 3D consistency in real-time. Our method can reconstruct high-quality appearance and an accurate mesh on both synthetic and real-world datasets. Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
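A triplane stores features on three axis-aligned 2D planes; querying a 3D point projects it onto each plane, bilinearly samples, and aggregates. A sketch of that lookup (features are summed here; some triplane variants concatenate instead):

```python
import numpy as np

def sample_plane(plane, u, v):
    """Bilinearly sample a (C, H, W) feature plane at (u, v) in [-1, 1]."""
    C, H, W = plane.shape
    x = (u + 1.0) * 0.5 * (W - 1)
    y = (v + 1.0) * 0.5 * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * plane[:, y0, x0] + fx * plane[:, y0, x1]
    bot = (1 - fx) * plane[:, y1, x0] + fx * plane[:, y1, x1]
    return (1 - fy) * top + fy * bot

def triplane_features(planes, p):
    """Query a triplane field at 3D point p = (x, y, z) in [-1, 1]^3."""
    x, y, z = p
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))

planes = {k: np.random.randn(8, 32, 32) for k in ("xy", "xz", "yz")}
print(triplane_features(planes, (0.1, -0.3, 0.7)).shape)  # (8,) feature vector
```

The sampled feature vector is then decoded (e.g., by a small MLP) into density and color for volume rendering.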
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- Dressi: A Hardware-Agnostic Differentiable Renderer with Reactive Shader Packing and Soft Rasterization [6.443504994276216]
Differentiable rendering (DR) enables various computer graphics and computer vision applications through gradient-based optimization with derivatives of the rendering equation.
Most rasterization-based approaches are built on general-purpose automatic differentiation (AD) libraries and DR-specific modules handcrafted using CUDA.
We present a practical hardware-agnostic differentiable renderer called Dressi, which is based on a new full AD design.
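The core idea of DR in miniature: the rendered output is a differentiable function of scene parameters, so those parameters can be fit by gradient descent. A toy example with a hand-derived gradient for a single alpha-blended fragment (Dressi builds a full AD system; this shows only the principle, and every name here is illustrative):

```python
import numpy as np

def render(color, alpha, bg):
    """Toy 'renderer': one fragment alpha-blended over a background pixel."""
    return alpha * color + (1.0 - alpha) * bg

target = np.array([0.2, 0.7, 0.4])   # pixel value we want to reproduce
bg = np.array([1.0, 1.0, 1.0])       # fixed white background
color = np.zeros(3)                  # scene parameter being optimized
alpha, lr = 0.8, 0.5

for _ in range(100):
    pred = render(color, alpha, bg)
    # For loss = ||pred - target||^2, the chain rule gives
    # d(loss)/d(color) = 2 * (pred - target) * d(pred)/d(color),
    # and d(pred)/d(color) = alpha for this renderer.
    grad = 2.0 * (pred - target) * alpha
    color -= lr * grad               # gradient-descent update

print(render(color, alpha, bg), "vs target", target)
```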
arXiv Detail & Related papers (2022-04-04T11:07:03Z)
- Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO [46.20949184826173]
This work focuses on the applicability of efficient low-level, GPU hardware-specific instructions to improve on existing computer vision algorithms.
Non-maxima suppression and the subsequent feature selection, in particular, are prominent contributors to the overall image processing latency.
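For reference, the logic of 3x3 non-maxima suppression is simple even though making it fast on a GPU is not; a plain vectorized NumPy version, not the paper's GPU implementation:

```python
import numpy as np

def nms_3x3(scores, threshold):
    """Keep a pixel only if it is the maximum of its 3x3 neighborhood.

    scores: (H, W) corner-response map (e.g., from a FAST-style detector).
    Returns (row, col) coordinates of the surviving keypoints.
    """
    H, W = scores.shape
    padded = np.pad(scores, 1, constant_values=-np.inf)
    # Stack all nine 3x3-shifted views and take their pointwise maximum.
    neigh = np.stack([padded[dy:dy + H, dx:dx + W]
                      for dy in range(3) for dx in range(3)])
    keep = (scores >= neigh.max(axis=0)) & (scores > threshold)
    return np.argwhere(keep)

scores = np.random.rand(120, 160)
print(len(nms_3x3(scores, threshold=0.99)), "keypoints kept")
```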
arXiv Detail & Related papers (2020-03-30T14:16:23Z)