Learning Adaptive Sampling and Reconstruction for Volume Visualization
- URL: http://arxiv.org/abs/2007.10093v1
- Date: Mon, 20 Jul 2020 13:36:54 GMT
- Title: Learning Adaptive Sampling and Reconstruction for Volume Visualization
- Authors: Sebastian Weiss, Mustafa Işık, Justus Thies, Rüdiger Westermann
- Abstract summary: A central challenge in data visualization is to understand which data samples are required to generate an image of a data set in which the relevant information is encoded.
In this work, we make a first step towards answering the question of whether an artificial neural network can predict where to sample the data with higher or lower density.
We introduce a novel neural rendering pipeline, which is trained end-to-end to generate a sparse adaptive sampling structure from a given low-resolution input image.
- Score: 13.595857406165294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central challenge in data visualization is to understand which data samples
are required to generate an image of a data set in which the relevant
information is encoded. In this work, we make a first step towards answering
the question of whether an artificial neural network can predict where to
sample the data with higher or lower density, by learning correspondences
between the data, the sampling patterns, and the generated images. We introduce
a novel neural rendering pipeline, which is trained end-to-end to generate a
sparse adaptive sampling structure from a given low-resolution input image, and
reconstructs a high-resolution image from the sparse set of samples. For the
first time, to the best of our knowledge, we demonstrate that the selection of
structures that are relevant for the final visual representation can be jointly
learned together with the reconstruction of this representation from these
structures. Therefore, we introduce differentiable sampling and reconstruction
stages, which can leverage back-propagation based on supervised losses solely
on the final image. We shed light on the adaptive sampling patterns generated
by the network pipeline and analyze its use for volume visualization including
isosurface and direct volume rendering.
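To make the described pipeline concrete, here is a minimal PyTorch-style sketch of the general idea only, not the authors' architecture: an importance network predicts a sampling map from a low-resolution input, a smooth mask keeps the sparse selection differentiable, and a reconstruction network is supervised solely by a loss on the final high-resolution image. The layer sizes, the quantile-based soft mask, and the random tensors standing in for the renderer are illustrative assumptions.

```python
# Illustrative sketch of a differentiable adaptive sampling + reconstruction
# pipeline trained end-to-end with a loss on the final image only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImportanceNet(nn.Module):
    """Predicts a per-pixel sampling importance map at the target resolution."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, lowres):
        up = F.interpolate(lowres, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return torch.sigmoid(self.net(up))            # importance in (0, 1)

class ReconstructionNet(nn.Module):
    """Reconstructs a dense image from sparsely sampled pixels plus the mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, sparse_samples, mask):
        return self.net(torch.cat([sparse_samples, mask], dim=1))

def soft_sample_mask(importance, budget=0.1, sharpness=50.0):
    """Differentiable surrogate for 'keep the top `budget` fraction of pixels'."""
    thresh = torch.quantile(importance.flatten(1), 1.0 - budget, dim=1)
    return torch.sigmoid(sharpness * (importance - thresh.view(-1, 1, 1, 1)))

importance_net, recon_net = ImportanceNet(), ReconstructionNet()
opt = torch.optim.Adam(list(importance_net.parameters()) +
                       list(recon_net.parameters()), lr=1e-4)

# Random tensors stand in for the renderer: a low-resolution preview image and
# the high-resolution reference that dense sampling of the volume would yield.
lowres = torch.rand(2, 3, 32, 32)
reference = torch.rand(2, 3, 128, 128)

for step in range(10):
    importance = importance_net(lowres)               # where to sample densely
    mask = soft_sample_mask(importance, budget=0.1)   # sparse yet differentiable
    sparse = reference * mask                         # "take" only masked samples
    prediction = recon_net(sparse, mask)              # reconstruct dense image
    loss = F.l1_loss(prediction, reference)           # loss on the final image only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every stage is differentiable, the image-space loss alone drives both where to sample and how to reconstruct, mirroring the end-to-end training described above.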
Related papers
- Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI Reconstruction [48.30341580103962]
We propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues.
We design a deep unfolding network based on the Chambolle and Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction.
Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
arXiv Detail & Related papers (2025-01-07T12:29:32Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow, however, because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray (the standard quadrature is written out after this list).
We propose a novel dual-network architecture that addresses this by learning how to best reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- PUERT: Probabilistic Under-sampling and Explicable Reconstruction Network for CS-MRI [47.24613772568027]
Compressed Sensing MRI aims at reconstructing de-aliased images from sub-Nyquist-sampled k-space data to accelerate MR imaging.
We propose a novel end-to-end Probabilistic Under-sampling and Explicable Reconstruction neTwork, dubbed PUERT, to jointly optimize the sampling pattern and the reconstruction network.
Experiments on two widely used MRI datasets demonstrate that our proposed PUERT achieves state-of-the-art results in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2022-04-24T04:23:57Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow compared to learned or traditional reconstruction techniques (a minimal sketch of the per-image optimization behind this slowness appears after this list).
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Single-pass Object-adaptive Data Undersampling and Reconstruction for MRI [6.599344783327054]
We propose a data-driven sampler using a convolutional neural network, MNet, to provide object-specific sampling patterns adaptive to each scanned object.
The network observes very limited low-frequency k-space data for each object and rapidly predicts the desired undersampling pattern.
Experimental results on the fastMRI knee dataset demonstrate the ability of the proposed learned undersampling network to generate object-specific masks at fourfold and eightfold acceleration.
arXiv Detail & Related papers (2021-11-17T16:06:06Z)
- Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
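For context on the AdaNeRF entry above: the cost it targets comes from the standard quadrature of the volume rendering equation used in neural radiance fields, whose accuracy depends on the number of samples N taken along each ray. In standard notation (generic, not specific to AdaNeRF):

\[
\hat{C}(\mathbf{r}) \;\approx\; \sum_{i=1}^{N} T_i \,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i = \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr),
\]

where \sigma_i and \mathbf{c}_i are the density and color predicted at the i-th sample along the ray \mathbf{r}, and \delta_i is the spacing between adjacent samples. Accurate images require a large N per ray, which is exactly the sample count that adaptive schemes such as AdaNeRF aim to reduce.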
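For the deep image prior entry above: the method's slowness stems from fitting a randomly initialized CNN to each degraded image from scratch, stopping early so the network reproduces the image content but not the corruption. Below is a minimal, generic sketch of that per-image loop; the tiny network, the noise input size, and the hyperparameters are illustrative assumptions, not details from the listed paper.

```python
# Generic deep-image-prior loop: optimize a randomly initialized CNN so that it
# maps a fixed noise code to the observed degraded image; early stopping acts
# as the implicit regularizer. Repeating this optimization for every new image
# is what makes the approach slow compared to feed-forward reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(                       # deliberately tiny, illustrative CNN
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
z = torch.randn(1, 32, 128, 128)           # fixed random input code
degraded = torch.rand(1, 3, 128, 128)      # placeholder for the observed image
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(200):                    # early stopping is essential in DIP
    restored = net(z)
    loss = F.mse_loss(restored, degraded)  # fit the observation ...
    opt.zero_grad()
    loss.backward()
    opt.step()                             # ... but stop before fitting the noise
```

The two-stage learning paradigm mentioned in that entry targets exactly this repeated per-image cost.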
This list is automatically generated from the titles and abstracts of the papers on this site.