ECNR: Efficient Compressive Neural Representation of Time-Varying
Volumetric Datasets
- URL: http://arxiv.org/abs/2311.12831v4
- Date: Sat, 9 Mar 2024 16:15:38 GMT
- Title: ECNR: Efficient Compressive Neural Representation of Time-Varying
Volumetric Datasets
- Authors: Kaiyuan Tang and Chaoli Wang
- Abstract summary: Compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive volumetric datasets.
This paper presents an efficient compressive neural representation (ECNR) solution for time-varying data compression.
We show the effectiveness of ECNR with multiple datasets and compare it with state-of-the-art compression methods.
- Score: 6.3492793442257085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to its conceptual simplicity and generality, compressive neural
representation has emerged as a promising alternative to traditional
compression methods for managing massive volumetric datasets. The current
practice of neural compression utilizes a single large multilayer perceptron
(MLP) to encode the global volume, incurring slow training and inference. This
paper presents an efficient compressive neural representation (ECNR) solution
for time-varying data compression, utilizing the Laplacian pyramid for adaptive
signal fitting. Following a multiscale structure, we leverage multiple small
MLPs at each scale for fitting local content or residual blocks. By assigning
similar blocks to the same MLP via size uniformization, we enable balanced
parallelization among MLPs to significantly speed up training and inference.
Working in concert with the multiscale structure, we tailor a deep compression
strategy to compact the resulting model. We show the effectiveness of ECNR with
multiple datasets and compare it with state-of-the-art compression methods
(mainly SZ3, TTHRESH, and neurcomp). The results position ECNR as a promising
solution for volumetric data compression.
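To make the multiscale design described above more concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: build a Laplacian pyramid of a volume, split each level into small blocks, group the blocks, and fit each group with its own small MLP. The block size, MLP width, and grouping-by-residual-energy step are illustrative assumptions; the paper's similarity-based block assignment, size uniformization, and deep compression stage are not reproduced here.

```python
# Minimal sketch (not the authors' code): Laplacian-pyramid residuals of a
# volume are split into small blocks, blocks are grouped, and each group is
# fit by its own small coordinate MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

def laplacian_pyramid(vol, levels=3):
    """vol: (1, 1, D, H, W) volume. Returns levels ordered coarse -> fine."""
    pyramid, cur = [], vol
    for _ in range(levels - 1):
        down = F.avg_pool3d(cur, kernel_size=2)
        up = F.interpolate(down, size=cur.shape[2:], mode="trilinear",
                           align_corners=False)
        pyramid.append(cur - up)   # residual (detail) at this scale
        cur = down
    pyramid.append(cur)            # coarsest approximation
    return pyramid[::-1]

class SmallMLP(nn.Module):
    """Tiny coordinate network that fits one group of blocks."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, width), nn.ReLU(),
                                 nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, 1))
    def forward(self, xyz):
        return self.net(xyz)

def blockify(level, block=8):
    """Split a (1, 1, D, H, W) level into (num_blocks, block**3) rows."""
    d = level.squeeze()
    b = d.unfold(0, block, block).unfold(1, block, block).unfold(2, block, block)
    return b.reshape(-1, block ** 3)

# Illustrative grouping: blocks with similar residual energy share one MLP,
# which enables balanced, parallel training of many small networks.
vol = torch.rand(1, 1, 64, 64, 64)
for level in laplacian_pyramid(vol):
    blocks = blockify(level)
    energy = blocks.abs().mean(dim=1)
    group_id = torch.bucketize(energy, energy.quantile(torch.tensor([0.33, 0.66])))
    mlps = {int(g): SmallMLP() for g in group_id.unique()}
    # each group's MLP would then be trained on (block coordinates, values) pairs
```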
Related papers
- Accelerating Distributed Deep Learning using Lossless Homomorphic
Compression [17.654138014999326]
We introduce a novel compression algorithm that effectively merges worker-level compression with in-network aggregation.
We show up to a 6.33$\times$ improvement in aggregation throughput and a 3.74$\times$ increase in per-iteration training speed.
arXiv Detail & Related papers (2024-02-12T09:57:47Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
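The learnable-memory-token idea summarized in the entry above can be sketched as cross-attention from input features to a small bank of learnable tokens. This is an illustrative sketch with assumed dimensions, not the paper's architecture.

```python
# Minimal sketch (illustrative, not the paper's code): input features attend
# over a small bank of learnable memory tokens via cross-attention.
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    def __init__(self, dim=64, num_memory_tokens=16, num_heads=4):
        super().__init__()
        # learnable memory bank shared across all inputs
        self.memory = nn.Parameter(torch.randn(num_memory_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        retrieved, _ = self.attn(query=x, key=mem, value=mem)
        return self.norm(x + retrieved)        # residual augmentation

feats = torch.randn(2, 10, 64)
out = MemoryAugmentedBlock()(feats)            # (2, 10, 64)
```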
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
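A rough sketch of the latent-to-soft-gating idea from the entry above: a compact latent code is mapped through a small network to sigmoid gates that modulate an INR's hidden features. The coordinate dimensionality, layer sizes, and gate placement are assumptions made for illustration.

```python
# Minimal sketch (assumed dimensions, not the paper's model): a compact latent
# code is mapped to soft gates that modulate the hidden units of a shared INR.
import torch
import torch.nn as nn

class GatedINR(nn.Module):
    def __init__(self, latent_dim=32, width=128):
        super().__init__()
        self.fc1 = nn.Linear(2, width)          # 2-D coordinates -> features
        self.fc2 = nn.Linear(width, width)
        self.out = nn.Linear(width, 3)          # e.g. RGB output
        # latent code -> per-unit soft gates in [0, 1] (encourages sparsity)
        self.gate = nn.Sequential(nn.Linear(latent_dim, width), nn.Sigmoid())

    def forward(self, coords, z):               # coords: (N, 2), z: (latent_dim,)
        g = self.gate(z)                        # (width,)
        h = torch.relu(self.fc1(coords)) * g    # gate the first hidden layer
        h = torch.relu(self.fc2(h)) * g         # reuse gates (illustrative)
        return self.out(h)

z = torch.randn(32)                             # compact latent representation
coords = torch.rand(1024, 2)
rgb = GatedINR()(coords, z)                     # (1024, 3)
```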
- A Low-Complexity Approach to Rate-Distortion Optimized Variable Bit-Rate Compression for Split DNN Computing [5.3221129103999125]
Split computing has emerged as a recent paradigm for implementation of DNN-based AI workloads.
We present an approach that addresses the challenge of optimizing the rate-accuracy-complexity trade-off.
Our approach is remarkably lightweight during both training and inference, highly effective, and achieves excellent rate-distortion performance.
arXiv Detail & Related papers (2022-08-24T15:02:11Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
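The pipeline named in the entry above (quantization, quantization-aware retraining, entropy coding) can be illustrated with a bare-bones post-training step: uniformly quantize an INR's weights and estimate the entropy-coded size from the empirical symbol distribution. The bit width and network shape are assumptions, and quantization-aware retraining and the actual entropy coder are omitted.

```python
# Minimal sketch (illustrative): uniform quantization of INR weights plus an
# entropy estimate of the coded size; quantization-aware retraining is omitted.
import torch
import torch.nn as nn

inr = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))                # toy coordinate network

def quantize_and_size(model, bits=8):
    total_bits = 0.0
    for p in model.parameters():
        w = p.detach()
        scale = w.abs().max() / (2 ** (bits - 1) - 1)
        q = torch.round(w / scale).to(torch.int32)   # quantized symbols
        p.data = q.float() * scale                   # dequantized weights
        # empirical entropy of the symbol stream ~ ideal entropy-coded size
        _, counts = torch.unique(q, return_counts=True)
        probs = counts.float() / q.numel()
        entropy = -(probs * probs.log2()).sum().item()
        total_bits += entropy * q.numel()
    return total_bits / 8.0                          # estimated bytes

print(f"estimated compressed size: {quantize_and_size(inr):.0f} bytes")
```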
- Communication-Efficient Federated Learning via Quantized Compressed Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance to the no-compression case.
arXiv Detail & Related papers (2021-11-30T02:13:54Z)
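To make the sparsify-then-quantize idea in the entry above concrete, the sketch below sparsifies a gradient, takes random compressed-sensing measurements, and quantizes them. The keep ratio, measurement ratio, and bit width are illustrative assumptions, and the server-side reconstruction algorithm is not shown.

```python
# Minimal sketch (illustrative, not the paper's scheme): sparsify a gradient,
# take random compressed-sensing measurements, and quantize the measurements.
import numpy as np

rng = np.random.default_rng(0)

def compress_gradient(grad, keep_ratio=0.05, meas_ratio=0.25, bits=4):
    d = grad.size
    # 1) keep only the largest-magnitude entries (sparsification)
    k = max(1, int(keep_ratio * d))
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    # 2) random Gaussian measurement matrix (compressed sensing)
    m = int(meas_ratio * d)
    A = rng.standard_normal((m, d)) / np.sqrt(m)
    y = A @ sparse
    # 3) uniform quantization of the measurements
    scale = np.abs(y).max() / (2 ** (bits - 1) - 1)
    q = np.round(y / scale).astype(np.int8)
    return q, scale, A                      # server would reconstruct from these

g = rng.standard_normal(1000)
q, scale, A = compress_gradient(g)
print(q.shape, scale)                       # (250,) measurements at 4 bits each
```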
- Sparse Tensor-based Multiscale Representation for Point Cloud Geometry Compression [18.24902526033056]
We develop a unified Point Cloud Geometry (PCG) compression method through a Sparse Tensor Processing (STP) based multiscale representation of voxelized PCG.
Complexity is reduced significantly because convolutions are only performed at Most-Probable Positively-Occupied Voxels (MP-POV).
The proposed method has lightweight complexity due to point-wise processing and a tiny storage requirement because of model sharing across all scales.
arXiv Detail & Related papers (2021-11-20T17:02:45Z)
- Orthogonal Features-based EEG Signal Denoising using Fractionally Compressed AutoEncoder [16.889633963766858]
A fractionally compressed auto-encoder architecture is introduced to solve the problem of denoising electroencephalogram (EEG) signals.
The proposed architecture provides improved denoising results on the standard datasets when compared with the existing methods.
arXiv Detail & Related papers (2021-02-16T11:15:00Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Top-k, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
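The threshold-based sparsification described in the SIDCo entry above can be sketched by fitting a simple parametric model to the gradient magnitudes and deriving a threshold from a target density; the exponential fit below is an illustrative choice and not necessarily the distribution family used in the paper.

```python
# Minimal sketch (illustrative): estimate a sparsification threshold by fitting
# an exponential model to gradient magnitudes, instead of an exact top-k sort.
import numpy as np

def threshold_sparsify(grad, target_density=0.01):
    mags = np.abs(grad)
    # fit Exp(lambda) to the magnitudes: MLE gives lambda = 1 / mean
    lam = 1.0 / (mags.mean() + 1e-12)
    # choose t so that P(|g| > t) = target_density under the fitted model
    t = -np.log(target_density) / lam
    mask = mags > t
    sparse = np.where(mask, grad, 0.0)
    return sparse, mask.mean()              # compressed gradient, actual density

rng = np.random.default_rng(0)
g = rng.laplace(scale=0.1, size=100_000)    # heavy-tailed toy gradient
sparse, density = threshold_sparsify(g, target_density=0.01)
print(f"achieved density: {density:.4f}")   # close to, but not exactly, 1%
```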
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
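As a rough illustration of the power-iteration compression mentioned in the PowerGossip entry above, the sketch below compresses the difference between two workers' weight matrices into a rank-1 pair of vectors using a single power step. The matrix shapes, rank, and warm-start handling are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch (illustrative): compress the difference between two workers'
# weight matrices to rank 1 with one power iteration step.
import numpy as np

rng = np.random.default_rng(0)

def rank1_power_compress(delta, q_prev=None):
    """delta: (m, n) parameter difference. Returns rank-1 factors (p, q)."""
    m, n = delta.shape
    q = rng.standard_normal(n) if q_prev is None else q_prev  # warm start
    p = delta @ q                     # one power step
    p /= np.linalg.norm(p) + 1e-12
    q = delta.T @ p                   # best right factor for this p
    return p, q                       # worker sends p and q instead of delta

w_a = rng.standard_normal((256, 128))
w_b = w_a + 0.01 * rng.standard_normal((256, 128))
p, q = rank1_power_compress(w_a - w_b)
approx = np.outer(p, q)              # neighbors apply the low-rank update
print(approx.shape, np.linalg.norm((w_a - w_b) - approx))
```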