Interactive Volume Visualization via Multi-Resolution Hash Encoding
based Neural Representation
- URL: http://arxiv.org/abs/2207.11620v3
- Date: Thu, 29 Jun 2023 20:35:50 GMT
- Title: Interactive Volume Visualization via Multi-Resolution Hash Encoding
based Neural Representation
- Authors: Qi Wu, David Bauer, Michael J. Doyle, Kwan-Liu Ma
- Abstract summary: We show that we can interactively ray trace volumetric neural representations (10-60fps) using modern GPU cores and a well-designed rendering algorithm.
Our neural representations are also high-fidelity (PSNR > 30dB) and compact (10-1000x smaller)
To support extreme-scale volume data, we also develop an efficient out-of-core training strategy, which allows our neural representation training to potentially scale up to terascale.
- Score: 29.797933404619606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have shown great potential in compressing volume data for
visualization. However, due to the high cost of training and inference, such
volumetric neural representations have thus far only been applied to offline
data processing and non-interactive rendering. In this paper, we demonstrate
that by simultaneously leveraging modern GPU tensor cores, a native CUDA neural
network framework, and a well-designed rendering algorithm with macro-cell
acceleration, we can interactively ray trace volumetric neural representations
(10-60fps). Our neural representations are also high-fidelity (PSNR > 30dB) and
compact (10-1000x smaller). Additionally, we show that it is possible to fit
the entire training step inside a rendering loop and skip the pre-training
process completely. To support extreme-scale volume data, we also develop an
efficient out-of-core training strategy, which allows our volumetric neural
representation training to potentially scale up to terascale using only an
NVIDIA RTX 3090 workstation.
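The core of the representation named in the title is a multi-resolution hash encoding: a query point is looked up in several hash-table-backed feature grids of increasing resolution, trilinearly interpolated at each level, and the per-level features are concatenated before being fed to a small MLP. A minimal NumPy sketch of that lookup is shown below; the level count, table size, feature width, and growth factor are illustrative values, not the paper's configuration, and the XOR-of-primes spatial hash follows the Instant-NGP formulation.

```python
# Sketch of a multi-resolution hash encoding lookup (Instant-NGP style).
# Hyperparameters here are illustrative, not the paper's exact settings.
import numpy as np

L_LEVELS = 4          # number of resolution levels (papers typically use more)
TABLE_SIZE = 2 ** 14  # entries per level's hash table
F_DIM = 2             # trainable feature channels per table entry
BASE_RES = 16         # coarsest grid resolution
GROWTH = 1.5          # per-level resolution growth factor

rng = np.random.default_rng(0)
# One trainable hash table per level; random init stands in for training.
tables = rng.normal(scale=1e-4, size=(L_LEVELS, TABLE_SIZE, F_DIM))

# Spatial-hash primes from the Instant-NGP paper (first coordinate uses 1).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(ijk):
    """XOR-of-primes hash of integer voxel coordinates -> table index."""
    h = np.bitwise_xor.reduce(ijk.astype(np.uint64) * PRIMES)
    return int(h % TABLE_SIZE)

def encode(x):
    """Encode a point x in [0,1]^3 into concatenated per-level features."""
    feats = []
    for lvl in range(L_LEVELS):
        res = int(BASE_RES * GROWTH ** lvl)
        pos = np.asarray(x) * res
        base = np.floor(pos).astype(np.int64)
        frac = pos - base
        acc = np.zeros(F_DIM)
        # Trilinear interpolation over the 8 surrounding voxel corners.
        for corner in range(8):
            offs = np.array([(corner >> d) & 1 for d in range(3)])
            w = 1.0
            for d in range(3):
                w *= frac[d] if offs[d] else (1.0 - frac[d])
            acc += w * tables[lvl, spatial_hash(base + offs)]
        feats.append(acc)
    return np.concatenate(feats)  # shape: (L_LEVELS * F_DIM,)

print(encode([0.3, 0.7, 0.1]).shape)  # (8,)
```

In the full method this encoding replaces a dense feature grid, which is what makes the representation compact: table entries collide at fine levels, and the MLP learns to disambiguate collisions.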
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Sophisticated deep learning with on-chip optical diffractive tensor processing [5.081061839052458]
Photonic integrated circuits provide an efficient approach to mitigate bandwidth limitations and power-wall brought by electronic counterparts.
We propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed the optical convolution unit (OCU).
With OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression.
arXiv Detail & Related papers (2022-12-20T03:33:26Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network-based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
- AutoInt: Automatic Integration for Fast Neural Volume Rendering [51.46232518888791]
We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x improvement in computational requirements, enabling fast neural volume rendering.
arXiv Detail & Related papers (2020-12-03T05:46:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.