Deep Hierarchical Super-Resolution for Scientific Data Reduction and
Visualization
- URL: http://arxiv.org/abs/2107.00462v1
- Date: Sun, 30 May 2021 18:32:11 GMT
- Title: Deep Hierarchical Super-Resolution for Scientific Data Reduction and
Visualization
- Authors: Skylar W. Wurster, Han-Wei Shen, Hanqi Guo, Thomas Peterka, Mukund
Raj, and Jiayi Xu
- Abstract summary: We present an approach for hierarchical super resolution (SR) using neural networks on an octree data representation.
We train a hierarchy of neural networks, each capable of 2x upscaling in each spatial dimension between two levels of detail.
We utilize these networks in a hierarchical super resolution algorithm that upscales multiresolution data to a uniform high resolution without introducing seam artifacts on octree node boundaries.
- Score: 11.095493889344317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for hierarchical super resolution (SR) using neural
networks on an octree data representation. We train a hierarchy of neural
networks, each capable of 2x upscaling in each spatial dimension between two
levels of detail, and use these networks in tandem to facilitate large scale
factor super resolution, scaling with the number of trained networks. We
utilize these networks in a hierarchical super resolution algorithm that
upscales multiresolution data to a uniform high resolution without introducing
seam artifacts on octree node boundaries. We evaluate application of this
algorithm in a data reduction framework by dynamically downscaling input data
to an octree-based data structure to represent the multiresolution data before
compressing for additional storage reduction. We demonstrate that our approach
avoids seam artifacts common to multiresolution data formats, and show how
neural network super resolution assisted data reduction can preserve global
features better than compressors alone at the same compression ratios.
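As a rough illustration of the core idea, the following is a minimal sketch (not the authors' code) of chaining a hierarchy of 2x-per-dimension SR networks so that k trained networks provide a 2^k scale factor between the coarsest and finest levels of detail. The `SRNet2x` module and its trilinear-upsample-plus-refinement design are illustrative placeholders, not the architecture described in the paper.

```python
# Minimal sketch of hierarchical 2x super resolution (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet2x(nn.Module):
    """Toy stand-in for one level's 2x-per-dimension SR network."""
    def __init__(self, channels=1):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Trilinear 2x upsampling followed by a small learned refinement.
        x = F.interpolate(x, scale_factor=2, mode="trilinear", align_corners=False)
        return x + self.conv(x)

def hierarchical_upscale(volume, models):
    """Apply one 2x network per level of detail; k models give a 2^k scale factor."""
    for net in models:
        volume = net(volume)
    return volume

# Example: three chained networks give 8x upscaling per spatial dimension.
models = [SRNet2x() for _ in range(3)]
coarse = torch.rand(1, 1, 16, 16, 16)        # (batch, channel, z, y, x)
fine = hierarchical_upscale(coarse, models)  # -> (1, 1, 128, 128, 128)
print(fine.shape)
```

In the paper's setting, each network in the chain is trained between two adjacent levels of detail of the octree representation, which is what allows the total scale factor to grow with the number of trained networks.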
Related papers
- Decoupling Fine Detail and Global Geometry for Compressed Depth Map Super-Resolution [55.9977636042469]
Bit-depth compression produces a uniform depth representation in regions with subtle variations, hindering the recovery of detailed information.
Densely distributed random noise reduces the accuracy of estimating the global geometric structure of the scene.
We propose a novel framework, termed geometry-decoupled network (GDNet), for compressed depth map super-resolution.
arXiv Detail & Related papers (2024-11-05T16:37:30Z)
- Improved distinct bone segmentation in upper-body CT through multi-resolution networks [0.39583175274885335]
In distinct bone segmentation from upper-body CT scans, a large field of view and a computationally taxing 3D architecture are required.
This leads to low-resolution results lacking detail or localisation errors due to missing spatial context.
We propose end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions.
arXiv Detail & Related papers (2023-01-31T14:46:16Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Pyramid Grafting Network for One-Stage High Resolution Saliency Detection [29.013012579688347]
We propose a one-stage framework called Pyramid Grafting Network (PGNet) to extract features from different resolution images independently.
An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable the CNN branch to combine broken detailed information more holistically.
We contribute a new Ultra-High-Resolution Saliency Detection dataset UHRSD, containing 5,920 images at 4K-8K resolutions.
arXiv Detail & Related papers (2022-04-11T12:22:21Z)
- ME-CapsNet: A Multi-Enhanced Capsule Networks with Routing Mechanism [0.0]
This research focuses on bringing in a novel solution that uses sophisticated optimization for enhancing both the spatial and channel components inside each layer's receptive field.
We have proposed ME-CapsNet by introducing deeper convolutional layers to extract important features before passing through modules of capsule layers strategically.
The deeper convolutional layer includes blocks of Squeeze-Excitation networks which use a sampling approach for reconstructing their interdependencies without much loss of important feature information.
arXiv Detail & Related papers (2022-03-29T13:29:38Z)
- Deep Networks for Image and Video Super-Resolution [30.75380029218373]
Our single image super-resolution (SISR) network is built using efficient convolutional units we refer to as mixed-dense connection blocks (MDCB).
We train two versions of our network to enhance complementary image qualities using different loss configurations.
We further employ our network for the video super-resolution task, where it learns to aggregate information from multiple frames and maintain spatio-temporal consistency.
arXiv Detail & Related papers (2022-01-28T09:15:21Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network [3.6683231417848283]
We introduce OverNet, a deep but lightweight convolutional network to solve SISR at arbitrary scale factors with a single model.
We show that our network outperforms previous state-of-the-art results in standard benchmarks while using fewer parameters than previous approaches.
arXiv Detail & Related papers (2020-08-05T22:10:29Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI.
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information (a minimal sketch of this idea follows the list), which not only reduces otherwise unaffordable memory usage and computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
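Referring back to the SSRNet entry above, the following is a minimal sketch of a spatial/spectral separable 3D convolution: a full 3x3x3 kernel is replaced by a 1x3x3 convolution over the spatial axes followed by a 3x1x1 convolution along the spectral axis, which cuts parameters and memory. The module name `SeparableConv3d` and the channel sizes are illustrative assumptions, not SSRNet's actual implementation.

```python
# Minimal sketch of a spatial/spectral separable 3D convolution (illustrative only).
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """1x3x3 convolution over the spatial axes, then 3x1x1 over the spectral axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Convolve over the two spatial axes only.
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Convolve over the spectral (band) axis only.
        self.spectral = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):
        return self.spectral(self.spatial(x))

# Example: a hyperspectral cube with 31 bands at 64x64 spatial resolution.
x = torch.rand(1, 1, 31, 64, 64)  # (batch, channel, bands, height, width)
y = SeparableConv3d(1, 8)(x)      # -> (1, 8, 31, 64, 64)
print(y.shape)
```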