iVPF: Numerical Invertible Volume Preserving Flow for Efficient Lossless Compression
- URL: http://arxiv.org/abs/2103.16211v1
- Date: Tue, 30 Mar 2021 09:50:58 GMT
- Title: iVPF: Numerical Invertible Volume Preserving Flow for Efficient Lossless Compression
- Authors: Shifeng Zhang, Chen Zhang, Ning Kang and Zhenguo Li
- Abstract summary: Storing today's rapidly growing big data is nontrivial and demands high-performance lossless compression techniques.
We propose the Numerical Invertible Volume Preserving Flow (iVPF), derived from general volume-preserving flows.
Experiments on various datasets show that the compression algorithm based on iVPF achieves state-of-the-art compression ratios among lightweight compression algorithms.
- Score: 21.983560104199622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is nontrivial to store today's rapidly growing big data, which demands
high-performance lossless compression techniques. Likelihood-based generative
models have seen success in lossless compression, where flow-based models are
desirable because their bijective mappings permit exact data-likelihood
optimisation. However, common continuous flows conflict with the discreteness
of coding schemes, which requires either 1) imposing strict constraints on the
flow models, degrading performance, or 2) coding numerous bijective-mapping
errors, reducing efficiency. In this paper, we investigate volume-preserving
flows for lossless compression and show that an error-free bijective mapping
is possible. We propose the Numerical Invertible Volume Preserving Flow (iVPF),
derived from general volume-preserving flows. By introducing novel computation
algorithms on flow models, an exact bijective mapping is achieved without any
numerical error. We also propose a lossless compression algorithm based on
iVPF. Experiments on various datasets show that the algorithm based on iVPF
achieves state-of-the-art compression ratios among lightweight compression
algorithms.
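To make the idea concrete, here is a minimal sketch (not the paper's actual iVPF algorithm; `shift_net` and the grid size `k` are illustrative stand-ins) of why a volume-preserving additive coupling can round-trip discrete data with zero numerical error once its shift is quantised to the data grid:

```python
import numpy as np

# Minimal sketch: an additive coupling layer is volume preserving
# (Jacobian determinant = 1). Quantising the learned shift to the same
# integer grid as the data makes forward/inverse exact in integer
# arithmetic, so an entropy coder can act on the latents directly.

def shift_net(x_a):
    # Hypothetical "network": any deterministic function works for the demo.
    return np.sin(x_a) * 4.0

def forward(x, k=256):
    # x: integer-valued array (e.g. pixel values on the coding grid).
    x_a, x_b = np.split(x, 2)
    t = np.round(shift_net(x_a / k) * k).astype(np.int64)  # quantised shift
    return np.concatenate([x_a, x_b + t])

def inverse(z, k=256):
    z_a, z_b = np.split(z, 2)
    t = np.round(shift_net(z_a / k) * k).astype(np.int64)  # identical shift
    return np.concatenate([z_a, z_b - t])

x = np.random.randint(0, 256, size=8).astype(np.int64)
assert np.array_equal(inverse(forward(x)), x)  # exact bijection, no error
```

Because the map preserves volume, no Jacobian correction is needed when converting likelihoods into code lengths; the paper's contribution is a numerical scheme that keeps general volume-preserving flows exactly invertible under finite precision, which this toy example only gestures at.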
Related papers
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- DiffRate: Differentiable Compression Rate for Efficient Vision Transformers [98.33906104846386]
Token compression aims to speed up large-scale vision transformers (e.g. ViTs) by pruning (dropping) or merging tokens; a minimal merging sketch appears after this list.
DiffRate is a novel token compression method with several appealing properties that prior arts lack.
arXiv Detail & Related papers (2023-05-29T10:15:19Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar, sonar, and ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements relative to the full received signal in time.
arXiv Detail & Related papers (2022-09-28T15:16:58Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modelling latent variables with priors and hyperpriors is an essential problem in variational image compression; a sketch of the underlying discretised-Gaussian entropy model appears after this list.
We find that both inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- A Physics-Informed Vector Quantized Autoencoder for Data Compression of Turbulent Flow [28.992515947961593]
We apply a physics-informed deep learning technique based on vector quantization to generate a low-dimensional representation of data from turbulent flows (see the vector-quantization sketch after this list).
The accuracy of the model is assessed using statistical, comparison-based similarity and physics-based metrics.
Our model offers a compression ratio (CR) of $85$ with a mean squared error (MSE) of $O(10^{-3})$, and predictions that faithfully reproduce the statistics of the flow, except at the very smallest scales.
arXiv Detail & Related papers (2022-01-10T19:55:50Z)
- Optimal Rate Adaption in Federated Learning with Compressed Communications [28.16239232265479]
Federated learning incurs high communication overhead, which can be greatly alleviated by compressing model updates.
The tradeoff between compression and model accuracy in the networked environment remains unclear.
We present a framework that maximizes the final model accuracy by strategically adjusting the compression rate in each iteration.
arXiv Detail & Related papers (2021-12-13T14:26:15Z)
- iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder [38.297114268193]
iFlow is a new method for achieving efficient lossless compression.
iFlow achieves state-of-the-art compression ratios and is $5\times$ faster than other high-performance schemes.
arXiv Detail & Related papers (2021-11-01T14:15:58Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression [20.162897999101716]
Integer discrete flows are a recently proposed class of models that learn invertible transformations for integer-valued random variables; a sketch of the rounding trick they rely on appears after this list.
We show how different architecture modifications improve the performance of this model class for lossless compression.
arXiv Detail & Related papers (2020-06-22T17:41:55Z)
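For the DiffRate entry above, here is a hypothetical token-merging step. Merging is one of the two operations that summary names; DiffRate's differentiable-rate mechanism itself is not reproduced, and all names here are illustrative:

```python
import numpy as np

# Illustrative token merging: repeatedly average the most cosine-similar
# pair of tokens, shrinking the sequence the transformer must process.
def merge_tokens(tokens, n_merge):
    tokens = tokens.copy()  # (N, D) token embeddings; do not mutate caller
    for _ in range(n_merge):
        normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = normed @ normed.T
        np.fill_diagonal(sim, -np.inf)          # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        tokens[i] = (tokens[i] + tokens[j]) / 2  # merge the pair into one token
        tokens = np.delete(tokens, j, axis=0)
    return tokens

x = np.random.randn(16, 8)
print(merge_tokens(x, 4).shape)  # (12, 8): four tokens merged away
```

Pruning would instead drop whole rows; per its title, DiffRate's contribution is making the compression rate itself differentiable rather than a hand-tuned hyperparameter.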
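For the Unified Multivariate Gaussian Mixture entry, here is a minimal sketch of the standard discretised-Gaussian entropy model that this line of work generalises to mixtures (the paper's unified multivariate model is not reproduced; all parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

# The probability of an integer latent z under N(mu, sigma), integrated
# over the bin [z - 0.5, z + 0.5], gives the code length -log2 p(z) that
# an arithmetic coder would spend on it.
def code_length_bits(z, mu, sigma):
    p = norm.cdf(z + 0.5, mu, sigma) - norm.cdf(z - 0.5, mu, sigma)
    return -np.log2(p)

# Mixture version (illustrative): weights w_k and components (mu_k, sigma_k).
def mixture_code_length(z, w, mu, sigma):
    p = sum(wk * (norm.cdf(z + 0.5, m, s) - norm.cdf(z - 0.5, m, s))
            for wk, m, s in zip(w, mu, sigma))
    return -np.log2(p)

print(code_length_bits(0, 0.0, 1.0))  # ~1.4 bits near the mode
print(mixture_code_length(3, [0.7, 0.3], [0.0, 4.0], [1.0, 1.0]))
```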
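For the physics-informed vector-quantized autoencoder entry, an illustrative vector-quantisation step (the paper's physics-informed losses are beyond this sketch):

```python
import numpy as np

# Core of a VQ autoencoder's bottleneck: each latent vector is replaced by
# its nearest codebook entry, so only the integer code indices need storing.
def quantize(latents, codebook):
    # latents: (N, D); codebook: (K, D)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)         # (N,) integer codes to store
    return idx, codebook[idx]      # indices + dequantised reconstruction

codebook = np.random.randn(64, 16)
idx, recon = quantize(np.random.randn(100, 16), codebook)
```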
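For the IDF++ entry, a sketch of the rounding trick integer discrete flows rely on, paired with the commonly used straight-through estimator (an assumption on our part; IDF++'s specific architectural changes are not shown):

```python
import torch

# Rounding the coupling network's output keeps the additive coupling
# bijective on integers; the straight-through estimator passes gradients
# through the (zero-gradient) rounding so the network can still train.
def st_round(t):
    return t + (torch.round(t) - t).detach()

def idf_coupling(x_a, x_b, net):
    # Invertible on integers: x_b = z_b - st_round(net(z_a)).
    return x_a, x_b + st_round(net(x_a))

net = torch.nn.Linear(4, 4)
x_a = torch.round(torch.randn(2, 4) * 10)  # integer-valued halves
x_b = torch.round(torch.randn(2, 4) * 10)
z_a, z_b = idf_coupling(x_a, x_b, net)
z_b.sum().backward()                        # gradients flow despite rounding
```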