Dynamic Sampling Rate: Harnessing Frame Coherence in Graphics
Applications for Energy-Efficient GPUs
- URL: http://arxiv.org/abs/2202.10533v1
- Date: Mon, 21 Feb 2022 21:15:14 GMT
- Title: Dynamic Sampling Rate: Harnessing Frame Coherence in Graphics
Applications for Energy-Efficient GPUs
- Authors: Martí Anglada, Enrique de Lucas, Joan-Manuel Parcerisa, Juan L.
Aragón and Antonio González
- Abstract summary: This work proposes Dynamic Sampling Rate (DSR), a novel hardware mechanism to reduce redundancy and improve the energy efficiency in graphics applications.
We evaluate the performance of a state-of-the-art mobile GPU architecture extended with DSR for a wide variety of applications.
- Score: 1.0433988610452742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In real-time rendering, a 3D scene is modelled with meshes of triangles that
the GPU projects to the screen. The triangles are discretized by sampling them at
regular intervals in screen space to generate fragments, to which a shader program
then applies texture and lighting effects. Realistic scenes require detailed
geometric models, complex shaders, high-resolution displays and high screen
refresh rates, all of which come at a great cost in compute time and energy. This
cost is often dominated by the fragment shader, which runs for each sampled
fragment. Conventional GPUs sample each triangle once per pixel; however, many
screen regions contain little variation, produce identical fragments, and could
be sampled at a lower-than-pixel rate with no loss in quality. Additionally,
since temporal frame coherence makes consecutive frames very similar, such
variations usually persist from frame to frame. This
work proposes Dynamic Sampling Rate (DSR), a novel hardware mechanism to reduce
redundancy and improve the energy efficiency in graphics applications. DSR
analyzes the spatial frequencies of the scene once it has been rendered. Then,
it leverages the temporal coherence in consecutive frames to decide, for each
region of the screen, the lowest sampling rate to employ in the next frame that
maintains image quality. We evaluate the performance of a state-of-the-art
mobile GPU architecture extended with DSR for a wide variety of applications.
Experimental results show that DSR is able to remove most of the redundancy
inherent in the color computations at fragment granularity, which brings
average speedups of 1.68x and energy savings of 40%.
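The abstract describes a two-step control loop: analyze the spatial frequencies of each screen region once a frame is rendered, then, relying on temporal coherence, sample the same region in the next frame at the lowest rate that preserves quality. The Python/NumPy sketch below is a minimal illustration of that loop; the tile size, the frequency metric, the set of rates, and the thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

TILE = 16                         # tile edge in pixels (assumed)
RATES = (1.0, 0.5, 0.25)          # samples per pixel: full, half, quarter (assumed)
SMOOTH, VERY_SMOOTH = 0.10, 0.05  # high-frequency energy thresholds (assumed)

def high_freq_share(tile: np.ndarray) -> float:
    """Share of (non-DC) spectral energy outside the central low-frequency band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(tile - tile.mean())))
    h, w = spec.shape
    low = spec[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    return 1.0 - low / (spec.sum() + 1e-9)

def choose_rates(frame: np.ndarray) -> np.ndarray:
    """Per tile, pick the lowest sampling rate expected to preserve quality.
    Temporal coherence is what justifies applying the decision to frame N+1."""
    ty, tx = frame.shape[0] // TILE, frame.shape[1] // TILE
    rates = np.full((ty, tx), RATES[0])
    for i in range(ty):
        for j in range(tx):
            e = high_freq_share(frame[i*TILE:(i+1)*TILE, j*TILE:(j+1)*TILE])
            if e < VERY_SMOOTH:
                rates[i, j] = RATES[2]   # flat region: quarter-rate sampling
            elif e < SMOOTH:
                rates[i, j] = RATES[1]   # mildly varying region: half-rate
    return rates                         # consumed when sampling the next frame
```

In a real pipeline the returned rate map would drive the rasterizer's sample generation for the next frame; here it is simply returned for inspection.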
Related papers
- Temporally Compressed 3D Gaussian Splatting for Dynamic Scenes [46.64784407920817]
Temporally Compressed 3D Gaussian Splatting (TC3DGS) is a novel technique designed specifically to compress dynamic 3D Gaussian representations.
Our experiments across multiple datasets demonstrate that TC3DGS achieves up to 67× compression with minimal or no degradation in visual quality.
arXiv Detail & Related papers (2024-12-07T17:03:09Z)
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multi-view images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z)
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision. (A toy traversal-and-compositing sketch appears after this list.)
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis [0.552480439325792]
High-fidelity scene reconstruction with an optimized 3D Gaussian splat representation has been introduced for novel view synthesis from sparse image sets.
We propose a compressed 3D Gaussian splat representation that utilizes sensitivity-aware vector clustering with quantization-aware training to compress directional colors and Gaussian parameters. (A schematic weighted-clustering sketch appears after this list.)
arXiv Detail & Related papers (2023-11-17T14:40:43Z)
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured Surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas. (A minimal routing sketch appears after this list.)
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
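The 3D Gaussian Ray Tracing entry above describes building a bounding volume hierarchy over the particles, casting one ray per pixel, and compositing many semi-transparent hits. The toy sketch below, referenced in that entry, replaces the paper's bounding meshes with axis-aligned boxes and the hardware BVH with a brute-force candidate loop; the particle data layout (dicts with lo/hi/rgb/alpha) is an assumption for illustration.

```python
import numpy as np

def ray_aabb_entry(origin, direction, lo, hi):
    """Slab test; returns the entry distance t, or None on a miss.
    Assumes no zero components in `direction` (fine for a sketch)."""
    inv = 1.0 / direction
    t0, t1 = (lo - origin) * inv, (hi - origin) * inv
    tmin = np.minimum(t0, t1).max()
    tmax = np.maximum(t0, t1).min()
    return tmin if tmax >= max(tmin, 0.0) else None

def shade_ray(origin, direction, particles):
    """Composite semi-transparent particle hits front to back along one ray."""
    hits = []
    for p in particles:               # a BVH would prune this loop on RT hardware
        t = ray_aabb_entry(origin, direction, p["lo"], p["hi"])
        if t is not None:
            hits.append((t, p))
    color, transmittance = np.zeros(3), 1.0
    for _, p in sorted(hits, key=lambda h: h[0]):
        color += transmittance * p["alpha"] * p["rgb"]
        transmittance *= 1.0 - p["alpha"]
        if transmittance < 1e-3:      # early ray termination
            break
    return color
```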
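The compressed-3DGS entry names sensitivity-aware vector clustering as its main compression step. As a rough schematic, the sketch below reduces that idea to sensitivity-weighted k-means over Gaussian parameter vectors, yielding a codebook plus per-Gaussian indices; the codebook size, iteration count, and weighting scheme are assumptions, and the paper's quantization-aware training stage is omitted.

```python
import numpy as np

def weighted_kmeans(params, weights, k=256, iters=10, seed=0):
    """Cluster parameter vectors into a k-entry codebook; each Gaussian then
    stores only a codebook index instead of its full parameter vector."""
    rng = np.random.default_rng(seed)
    codebook = params[rng.choice(len(params), size=k, replace=False)]
    for _ in range(iters):
        # assign every vector to its nearest codeword (squared L2 distance)
        dists = ((params[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)
        # recompute each codeword as the sensitivity-weighted centroid
        for c in range(k):
            members = idx == c
            if members.any():
                w = weights[members][:, None]
                codebook[c] = (w * params[members]).sum(0) / w.sum()
    return codebook, idx
```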
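The NeRFPlayer entry describes associating each point of the 4D space with probabilities over three categories. The sketch below shows the routing step such a decomposition implies; the probability head here is a random linear layer plus softmax, a purely illustrative stand-in for the learned head.

```python
import numpy as np

def prob_head(xyzt, W):
    """Stand-in probability head: linear map + softmax over three categories."""
    logits = xyzt @ W                             # (N, 4) @ (4, 3) -> (N, 3)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def route_points(xyzt, W):
    """Send each 4D point to the field with the highest category probability."""
    labels = prob_head(xyzt, W).argmax(axis=1)    # 0=static, 1=deforming, 2=new
    names = ("static", "deforming", "new")
    return {name: xyzt[labels == i] for i, name in enumerate(names)}

# usage with random stand-ins for the points and the learned head
points = np.random.default_rng(0).normal(size=(1000, 4))
W = np.random.default_rng(1).normal(size=(4, 3))
buckets = route_points(points, W)   # three point sets, one per sub-field
```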
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.