FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks
- URL: http://arxiv.org/abs/2209.09965v1
- Date: Tue, 20 Sep 2022 19:48:56 GMT
- Title: FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks
- Authors: David Bauer and Qi Wu and Kwan-Liu Ma
- Abstract summary: FoVolNet is a method to significantly increase the performance of volume data visualization.
We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full frame using a deep neural network.
- Score: 33.489890950757975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Volume data is found in many important scientific and engineering
applications. Rendering this data for visualization at high quality and
interactive rates for demanding applications such as virtual reality is still
not easily achievable even using professional-grade hardware. We introduce
FoVolNet -- a method to significantly increase the performance of volume data
visualization. We develop a cost-effective foveated rendering pipeline that
sparsely samples a volume around a focal point and reconstructs the full frame
using a deep neural network. Foveated rendering is a technique that prioritizes
rendering computations around the user's focal point. This approach leverages
properties of the human visual system, thereby saving computational resources
when rendering data in the periphery of the user's field of vision. Our
reconstruction network combines direct and kernel prediction methods to produce
fast, stable, and perceptually convincing output. With a slim design and the
use of quantization, our method outperforms state-of-the-art neural
reconstruction techniques in both end-to-end frame times and visual quality. We
conduct extensive evaluations of the system's rendering performance, inference
speed, and perceptual properties, and we provide comparisons to competing
neural image reconstruction techniques. Our test results show that FoVolNet
consistently achieves significant time savings over conventional rendering while
preserving perceptual quality.
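The abstract describes two technical ingredients: gaze-contingent sparse sampling around a focal point, and a reconstruction network that combines direct prediction (the network emits colors outright) with kernel prediction (the network emits per-pixel filter weights applied to the sparse input). The listing gives no pseudocode, so the sketch below is a minimal NumPy illustration under assumed names and parameters: foveated_sample_mask, blend_reconstruction, the Gaussian acuity falloff, and the hard sampled/unsampled blend rule are all hypothetical stand-ins, not FoVolNet's actual learned pipeline.
```python
import numpy as np

def foveated_sample_mask(h, w, focal, base_rate=0.02, fovea_radius=0.1,
                         sigma=0.25, rng=None):
    """Boolean mask of pixels to actually render this frame (illustrative).

    Pixels inside the foveal radius are always sampled; outside it, the
    sampling probability decays with a Gaussian falloff, mimicking the
    drop in visual acuity toward the periphery.
    """
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized distance of each pixel from the focal point (fx, fy in [0, 1]).
    d = np.hypot(xs / w - focal[0], ys / h - focal[1])
    prob = np.where(
        d < fovea_radius, 1.0,
        base_rate + (1.0 - base_rate)
        * np.exp(-((d - fovea_radius) ** 2) / (2.0 * sigma ** 2)))
    return rng.random((h, w)) < prob

def blend_reconstruction(sparse_rgb, mask, direct_rgb, kernels):
    """Toy combination of a direct branch and a 3x3 kernel-prediction branch.

    `direct_rgb` is a per-pixel color estimate; `kernels` holds per-pixel
    3x3 filter weights applied to the sparse input. Here we naively trust
    the kernel branch at sampled pixels and the direct branch elsewhere;
    the actual network learns how to combine the two end-to-end.
    """
    h, w, _ = sparse_rgb.shape
    padded = np.pad(sparse_rgb, ((1, 1), (1, 1), (0, 0)))
    # Gather 3x3 neighborhoods: shape (h, w, 9, 3).
    patches = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)], axis=2)
    kernel_rgb = (patches * kernels[..., None]).sum(axis=2)
    return np.where(mask[..., None], kernel_rgb, direct_rgb)

# Stand-in data; a real pipeline would use renderer output and network predictions.
rng = np.random.default_rng(0)
h, w = 256, 256
mask = foveated_sample_mask(h, w, focal=(0.5, 0.5), rng=rng)
sparse = rng.random((h, w, 3)) * mask[..., None]   # only sampled pixels carry color
kernels = np.full((h, w, 9), 1.0 / 9.0)            # placeholder predicted kernels
direct = rng.random((h, w, 3))                     # placeholder direct branch output
frame = blend_reconstruction(sparse, mask, direct, kernels)
```
The intuition this sketch tries to capture is the one the abstract hints at: a direct branch can fill in plausible content in sparsely sampled peripheral regions, while a kernel branch filters and stabilizes areas where real samples exist, which is why combining the two can yield fast yet perceptually stable output.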
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of real-time visual inference in IoVT systems by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations [25.88881764546414]
VQ-NeRF is an efficient pipeline for enhancing implicit neural representations via vector quantization.
We present an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales.
We incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions.
arXiv Detail & Related papers (2023-10-23T01:41:38Z) - Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z) - Spatio-Temporal Recurrent Networks for Event-Based Optical Flow Estimation [47.984368369734995]
We introduce a novel recurrent encoding-decoding neural network architecture for event-based optical flow estimation.
The network is end-to-end trained with self-supervised learning on the Multi-Vehicle Stereo Event Camera dataset.
We show that it outperforms existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-09-10T13:37:37Z) - NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z) - Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images [57.253447453301796]
We introduce Deep Direct Volume Rendering (DeepDVR), a generalization of Direct Volume Rendering (DVR) that allows for the integration of deep neural networks into the DVR algorithm.
We conceptualize the rendering in a latent color space, thus enabling the use of deep architectures to learn implicit mappings for feature extraction and classification.
Our generalization serves to derive novel volume rendering architectures that can be trained end-to-end directly from examples in image space.
arXiv Detail & Related papers (2021-06-09T23:03:00Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z) - Monocular Real-Time Volumetric Performance Capture [28.481131687883256]
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video.
Our system reconstructs a fully textured 3D human from each frame by leveraging Pixel-Aligned Implicit Function (PIFu).
We also introduce an Online Hard Example Mining (OHEM) technique that effectively suppresses failure modes due to the rare occurrence of challenging examples.
arXiv Detail & Related papers (2020-07-28T04:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.