Efficient Neural Representation of Volumetric Data using
Coordinate-Based Networks
- URL: http://arxiv.org/abs/2401.08840v1
- Date: Tue, 16 Jan 2024 21:33:01 GMT
- Title: Efficient Neural Representation of Volumetric Data using
Coordinate-Based Networks
- Authors: Sudarshan Devkota, Sumanta Pattanaik
- Abstract summary: We propose an efficient approach for the compression and representation of volumetric data using coordinate-based networks and hash encoding.
Our approach enables effective compression by learning a mapping between spatial coordinates and intensity values.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an efficient approach for the compression and
representation of volumetric data utilizing coordinate-based networks and
multi-resolution hash encoding. Efficient compression of volumetric data is
crucial for various applications, such as medical imaging and scientific
simulations. Our approach enables effective compression by learning a mapping
between spatial coordinates and intensity values. We compare different encoding
schemes and demonstrate the superiority of multi-resolution hash encoding in
terms of compression quality and training efficiency. Furthermore, we leverage
optimization-based meta-learning, specifically using the Reptile algorithm, to
learn weight initialization for neural representations tailored to volumetric
data, enabling faster convergence during optimization. Additionally, we compare
our approach with state-of-the-art methods to showcase improved image quality
and compression ratios. These findings highlight the potential of
coordinate-based networks and multi-resolution hash encoding for an efficient
and accurate representation of volumetric data, paving the way for advancements
in large-scale data visualization and other applications.
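The abstract does not include code, but the core idea is compact enough to sketch. Below is a minimal, hedged PyTorch illustration of a coordinate-based volume representation with a simplified multi-resolution hash encoding (in the spirit of Instant-NGP): the network is overfit to one volume so that its trained parameters act as the compressed representation, and decoding amounts to evaluating the network on a coordinate grid. The class names, level counts, table sizes, and the random placeholder volume are assumptions for illustration, not the authors' implementation.

```python
# Minimal illustrative sketch (not the authors' code): a coordinate network
# with a simplified multi-resolution hash encoding, overfit to a single
# volume so that the trained weights act as the compressed representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashEncoding(nn.Module):
    """Simplified multi-resolution hash encoding for 3D coordinates in [0, 1]^3."""
    def __init__(self, n_levels=8, table_size=2**14, feat_dim=2, base_res=16, growth=1.5):
        super().__init__()
        self.n_levels, self.table_size, self.feat_dim = n_levels, table_size, feat_dim
        self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
        # One small learnable hash table of feature vectors per resolution level.
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(n_levels)])

    def forward(self, x):                              # x: (N, 3) in [0, 1]
        feats = []
        for level, res in enumerate(self.resolutions):
            pos = x * res
            corner = pos.floor().long()                # lower grid corner per point
            w = pos - corner.float()                   # trilinear interpolation weights
            level_feat = 0.0
            for dx in (0, 1):                          # accumulate the 8 cell corners
                for dy in (0, 1):
                    for dz in (0, 1):
                        c = corner + corner.new_tensor([dx, dy, dz])
                        # Spatial hash of the integer corner (XOR of per-axis primes).
                        idx = ((c[:, 0] * 1) ^ (c[:, 1] * 2654435761) ^ (c[:, 2] * 805459861)) % self.table_size
                        wx = w[:, 0:1] if dx else 1 - w[:, 0:1]
                        wy = w[:, 1:2] if dy else 1 - w[:, 1:2]
                        wz = w[:, 2:3] if dz else 1 - w[:, 2:3]
                        level_feat = level_feat + wx * wy * wz * self.tables[level][idx]
            feats.append(level_feat)
        return torch.cat(feats, dim=-1)                # (N, n_levels * feat_dim)

class VolumeINR(nn.Module):
    """Hash-encoded (x, y, z) coordinates -> small MLP -> scalar intensity."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoding = HashEncoding()
        in_dim = self.encoding.n_levels * self.encoding.feat_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, coords):
        return self.mlp(self.encoding(coords))

def fit_volume(volume, steps=2000, batch=65536, lr=1e-2):
    """Overfit the network to one volume; its parameters are the 'compressed' data."""
    D, H, W = volume.shape
    model = VolumeINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, D * H * W, (batch,))
        z, y, x = idx // (H * W), (idx // W) % H, idx % W
        coords = torch.stack([x / (W - 1), y / (H - 1), z / (D - 1)], dim=-1).float()
        target = volume.reshape(-1)[idx].unsqueeze(-1)
        loss = F.mse_loss(model(coords), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

volume = torch.rand(64, 64, 64)      # placeholder volume; use real CT/simulation data
model = fit_volume(volume, steps=200)
```

Evaluating the fitted model on every voxel coordinate reconstructs the volume; the effective compression ratio is the size of the MLP weights plus hash tables relative to the raw volume.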
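For the meta-learning component, a similarly hedged sketch of a Reptile-style initialization: repeatedly adapt a copy of the current meta-weights to a single volume for a few inner steps, then move the meta-weights a fraction of the way toward the adapted weights. It reuses the hypothetical VolumeINR from the sketch above; the training volumes, step counts, and learning rates are illustrative assumptions.

```python
# Hedged sketch of Reptile meta-initialization for volumetric INRs.
# Assumes VolumeINR from the previous sketch and a list of training volumes.
import copy
import torch
import torch.nn.functional as F

def reptile_init(volumes, meta_steps=100, inner_steps=16,
                 inner_lr=1e-2, meta_lr=0.1, batch=16384):
    meta_model = VolumeINR()                       # the initialization being learned
    for step in range(meta_steps):
        volume = volumes[step % len(volumes)]      # one "task" = one training volume
        task_model = copy.deepcopy(meta_model)     # inner loop starts from meta weights
        opt = torch.optim.Adam(task_model.parameters(), lr=inner_lr)
        D, H, W = volume.shape
        for _ in range(inner_steps):               # a few steps of ordinary fitting
            idx = torch.randint(0, D * H * W, (batch,))
            z, y, x = idx // (H * W), (idx // W) % H, idx % W
            coords = torch.stack([x / (W - 1), y / (H - 1), z / (D - 1)], dim=-1).float()
            target = volume.reshape(-1)[idx].unsqueeze(-1)
            loss = F.mse_loss(task_model(coords), target)
            opt.zero_grad(); loss.backward(); opt.step()
        # Reptile update: nudge meta weights toward the task-adapted weights.
        with torch.no_grad():
            for p_meta, p_task in zip(meta_model.parameters(), task_model.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return meta_model
```

A new volume is then fit starting from meta_model's weights rather than a random initialization, which is what the abstract credits for faster convergence during optimization.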
Related papers
- Joint Hierarchical Priors and Adaptive Spatial Resolution for Efficient
Neural Image Compression [11.25130799452367]
We propose an absolute image compression transformer (ICT) for neural image compression (NIC).
ICT captures both global and local contexts from the latent representations and better parameterizes the distribution of the quantized latents.
Our framework significantly improves the trade-off between coding efficiency and decoder complexity over the versatile video coding (VVC) reference encoder (VTM-18.0) and the neural SwinT-ChARM.
arXiv Detail & Related papers (2023-07-05T13:17:14Z)
- Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z)
- Neural Data-Dependent Transform for Learned Image Compression [72.86505042102155]
We build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.
The experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism.
arXiv Detail & Related papers (2022-03-09T14:56:48Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815).
Our model covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Deep data compression for approximate ultrasonic image formation [1.0266286487433585]
In ultrasonic imaging systems, data acquisition and image formation are performed on separate computing devices.
Deep neural networks are optimized to preserve the image quality of a particular image formation method.
arXiv Detail & Related papers (2020-09-04T16:43:12Z)
- Towards Analysis-friendly Face Representation with Scalable Feature and Texture Compression [113.30411004622508]
We show that a universal and collaborative visual information representation can be achieved in a hierarchical way.
Based on the strong generative capability of deep neural networks, the gap between the base feature layer and the enhancement layer is further filled with feature-level texture reconstruction.
To improve the efficiency of the proposed framework, the base layer neural network is trained in a multi-task manner.
arXiv Detail & Related papers (2020-04-21T14:32:49Z)
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- End-to-End Facial Deep Learning Feature Compression with Teacher-Student Enhancement [57.18801093608717]
We propose a novel end-to-end feature compression scheme by leveraging the representation and learning capability of deep neural networks.
In particular, the extracted features are compactly coded in an end-to-end manner by optimizing the rate-distortion cost.
We verify the effectiveness of the proposed model with the facial feature, and experimental results reveal better compression performance in terms of rate-accuracy.
arXiv Detail & Related papers (2020-02-10T10:08:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.