HyperINR: A Fast and Predictive Hypernetwork for Implicit Neural
Representations via Knowledge Distillation
- URL: http://arxiv.org/abs/2304.04188v1
- Date: Sun, 9 Apr 2023 08:10:10 GMT
- Title: HyperINR: A Fast and Predictive Hypernetwork for Implicit Neural
Representations via Knowledge Distillation
- Authors: Qi Wu, David Bauer, Yuyang Chen, Kwan-Liu Ma
- Abstract summary: Implicit Neural Representations (INRs) have recently exhibited immense potential in the field of scientific visualization.
In this paper, we introduce HyperINR, a novel hypernetwork architecture capable of directly predicting the weights for a compact INR.
By harnessing an ensemble of multiresolution hash encoding units in unison, the resulting INR attains state-of-the-art inference performance.
- Score: 31.44962361819199
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Implicit Neural Representations (INRs) have recently exhibited immense
potential in the field of scientific visualization for both data generation and
visualization tasks. However, these representations often consist of large
multi-layer perceptrons (MLPs), necessitating millions of operations for a
single forward pass, consequently hindering interactive visual exploration.
While reducing the size of the MLPs and employing efficient parametric encoding
schemes can alleviate this issue, doing so compromises generalizability for unseen
parameters, rendering such representations unsuitable for tasks such as temporal
super-resolution. In this paper, we introduce HyperINR, a novel hypernetwork
architecture capable of directly predicting the weights for a compact INR. By
harnessing an ensemble of multiresolution hash encoding units in unison, the
resulting INR attains state-of-the-art inference performance (up to 100x higher
inference bandwidth) and can support interactive photo-realistic volume
visualization. Additionally, by incorporating knowledge distillation,
HyperINR achieves exceptional data and visualization generation quality, making our
method valuable for real-time parameter exploration. We validate the
effectiveness of the HyperINR architecture through a comprehensive ablation
study. We showcase the versatility of HyperINR across three distinct scientific
domains: novel view synthesis, temporal super-resolution of volume data, and
volume rendering with dynamic global shadows. By simultaneously achieving
efficiency and generalizability, HyperINR paves the way for applying INR in a
wider array of scientific visualization applications.
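
The abstract describes the architecture only at a high level: a hypernetwork predicts the weights of a compact INR whose inputs come from an ensemble of multiresolution hash encoding units, with knowledge distillation used to reach high data and visualization quality. No code accompanies this page, so the snippet below is only a minimal PyTorch sketch of that general weight-prediction idea; the class names, layer sizes, and the Fourier-feature stand-in for the multiresolution hash encoder are assumptions for illustration, not the authors' implementation.

```python
# Minimal, illustrative sketch (NOT the authors' code) of a hypernetwork that
# predicts the weights of a compact coordinate-based INR. A simple multi-
# frequency Fourier encoding stands in for the multiresolution hash encoding
# ensemble described in the paper; all sizes and names are assumptions.
import torch
import torch.nn as nn


def fourier_encode(x, num_freqs=6):
    """Encode 3D coordinates with sin/cos at multiple frequencies.

    Stand-in for a multiresolution hash encoding unit: each frequency band
    plays the role of one resolution level. Output dim is 3 * 2 * num_freqs.
    """
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)      # (F,)
    angles = x[..., None] * freqs                                 # (N, 3, F)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                              # (N, 3*2F)


class HyperINRSketch(nn.Module):
    """Maps a scene/time parameter vector to the weights of a tiny INR MLP."""

    def __init__(self, param_dim=1, enc_dim=36, hidden=16, out_dim=1):
        super().__init__()
        # Shapes of the compact INR the hypernetwork has to produce:
        # layer 1: enc_dim -> hidden, layer 2: hidden -> out_dim (plus biases).
        self.shapes = [(hidden, enc_dim), (hidden,), (out_dim, hidden), (out_dim,)]
        n_weights = sum(torch.Size(s).numel() for s in self.shapes)
        self.hyper = nn.Sequential(
            nn.Linear(param_dim, 128), nn.ReLU(),
            nn.Linear(128, n_weights),
        )

    def forward(self, coords, params):
        """coords: (N, 3) sample positions; params: (param_dim,), e.g. a timestep."""
        flat = self.hyper(params)                     # predicted INR weights
        chunks, offset = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            chunks.append(flat[offset:offset + n].view(s))
            offset += n
        w1, b1, w2, b2 = chunks
        feats = fourier_encode(coords)                # (N, enc_dim)
        h = torch.relu(feats @ w1.t() + b1)
        return h @ w2.t() + b2                        # predicted field values


# Usage: query a scalar field for timestep t = 0.3 at 1024 random points.
model = HyperINRSketch()
values = model(torch.rand(1024, 3), torch.tensor([0.3]))
print(values.shape)  # torch.Size([1024, 1])
```

The sketch only shows the weight-prediction data flow; how the ensemble of hash encoding units and the distillation losses are combined is specific to the paper and not reproduced here.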
Related papers
- Memory-efficient High-resolution OCT Volume Synthesis with Cascaded Amortized Latent Diffusion Models [48.87160158792048]
We introduce a cascaded amortized latent diffusion model (CA-LDM) that can synthesize high-resolution OCT volumes in a memory-efficient way.
Experiments on a public high-resolution OCT dataset show that our synthetic data have realistic high-resolution and global features, surpassing the capabilities of existing methods.
arXiv Detail & Related papers (2024-05-26T10:58:22Z) - Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
arXiv Detail & Related papers (2024-03-15T07:18:43Z) - ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and
Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate a high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose a novel adversarial automatic data augmentation framework, ADASR, that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z) - Multi-Depth Branch Network for Efficient Image Super-Resolution [12.042706918188566]
A longstanding challenge in Super-Resolution (SR) is how to efficiently enhance high-frequency details in Low-Resolution (LR) images.
We propose an asymmetric SR architecture featuring a Multi-Depth Branch Module (MDBM).
MDBMs contain branches of different depths, designed to capture high- and low-frequency information simultaneously and efficiently.
arXiv Detail & Related papers (2023-09-29T15:46:25Z) - FFEINR: Flow Feature-Enhanced Implicit Neural Representation for
Spatio-temporal Super-Resolution [4.577685231084759]
This paper proposes a Flow Feature-Enhanced Implicit Neural Representation (FFEINR) for super-resolution of flow field data.
It can take full advantage of the implicit neural representation in terms of model structure and sampling resolution.
The training process of FFEINR is facilitated by introducing feature enhancements for the input layer.
arXiv Detail & Related papers (2023-08-24T02:28:18Z) - ESSAformer: Efficient Transformer for Hyperspectral Image
Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z) - Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z) - Implicit Neural Representation Learning for Hyperspectral Image
Super-Resolution [0.0]
Implicit Neural Representations (INRs) are making strides as a novel and effective representation.
We propose a novel HSI reconstruction model based on INR which represents HSI by a continuous function mapping a spatial coordinate to its corresponding spectral radiance values.
arXiv Detail & Related papers (2021-12-20T14:07:54Z) - Parameterized Hypercomplex Graph Neural Networks for Graph
Classification [1.1852406625172216]
We develop graph neural networks that leverage the properties of hypercomplex feature transformation.
In particular, in our proposed class of models, the multiplication rule specifying the algebra itself is inferred from the data during training.
We test our proposed hypercomplex GNN on several open graph benchmark datasets and show that our models reach state-of-the-art performance.
arXiv Detail & Related papers (2021-03-30T18:01:06Z) - Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolutions to extract spatial and spectral information, which not only reduces memory usage and computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
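
The last entry (SSRNet) is concrete about its main building block: full 3D convolutions are replaced by spatial and spectral separable 3D convolutions to keep memory and compute manageable. As a rough illustration only, here is a minimal PyTorch sketch of such a separable unit; the channel count, kernel sizes, and module name are assumptions, not taken from the SSRNet paper.

```python
# Illustrative sketch (assumptions, not the SSRNet implementation): a 3D
# convolution over a hyperspectral cube (bands treated as depth) factored into
# a spatial pass (1 x 3 x 3 kernel) followed by a spectral pass (3 x 1 x 1),
# which is cheaper than a full 3 x 3 x 3 kernel.
import torch
import torch.nn as nn


class SeparableSpatialSpectralConv(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))   # mixes only H and W
        self.spectral = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))  # mixes only the band axis
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, bands, height, width)
        return self.act(self.spectral(self.act(self.spatial(x))))


# Usage on a toy cube with 31 spectral bands and a 64 x 64 spatial extent.
unit = SeparableSpatialSpectralConv(channels=32)
out = unit(torch.randn(1, 32, 31, 64, 64))
print(out.shape)  # torch.Size([1, 32, 31, 64, 64])
```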