iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder
- URL: http://arxiv.org/abs/2111.00965v1
- Date: Mon, 1 Nov 2021 14:15:58 GMT
- Title: iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder
- Authors: Shifeng Zhang, Ning Kang, Tom Ryder and Zhenguo Li
- Abstract summary: iFlow is a new method for achieving efficient lossless compression.
iFlow achieves state-of-the-art compression ratios and is $5\times$ quicker than other high-performance schemes.
- Score: 38.297114268193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It was estimated that the world produced 59 ZB ($5.9 \times 10^{13}$ GB) of
data in 2020, resulting in the enormous costs of both data storage and
transmission. Fortunately, recent advances in deep generative models have
spearheaded a new class of so-called "neural compression" algorithms, which
significantly outperform traditional codecs in terms of compression ratio.
Unfortunately, the application of neural compression garners little commercial
interest due to its limited bandwidth; therefore, developing highly efficient
frameworks is of critical practical importance. In this paper, we discuss
lossless compression using normalizing flows which have demonstrated a great
capacity for achieving high compression ratios. As such, we introduce iFlow, a
new method for achieving efficient lossless compression. We first propose
Modular Scale Transform (MST) and a novel family of numerically invertible flow
transformations based on MST. Then we introduce the Uniform Base Conversion
System (UBCS), a fast uniform-distribution codec incorporated into iFlow,
enabling efficient compression. iFlow achieves state-of-the-art compression
ratios and is $5\times$ quicker than other high-performance schemes.
Furthermore, the techniques presented in this paper can be used to accelerate
coding time for a broad class of flow-based algorithms.
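As a concrete illustration of the kind of codec UBCS provides, the sketch below packs a sequence of symbols, each uniform over its own alphabet, into a single integer by base conversion and unpacks it exactly. This is a minimal Python sketch of the underlying idea only, not the paper's UBCS implementation, which is engineered for speed; the function names and the reliance on Python's arbitrary-precision integers are illustrative assumptions.

```python
# Minimal sketch of a uniform-distribution coder via base conversion.
# Not the paper's UBCS: iFlow's codec is optimized for throughput, while
# this version leans on Python's big integers for clarity.

def uniform_encode(symbols, bases):
    """Pack symbols x_i, each uniform on {0, ..., k_i - 1}, into one integer."""
    code = 0
    for x, k in zip(symbols, bases):
        assert 0 <= x < k
        code = code * k + x          # append digit x in base k
    return code

def uniform_decode(code, bases):
    """Invert uniform_encode; digits are recovered in reverse order."""
    symbols = []
    for k in reversed(bases):
        code, x = divmod(code, k)    # strip the most recently appended digit
        symbols.append(x)
    return symbols[::-1]

bases = [256, 10, 1000, 7]           # per-symbol alphabet sizes k_i
data = [200, 3, 512, 6]
assert uniform_decode(uniform_encode(data, bases), bases) == data
```

The resulting code length is the bit length of the final integer, which approaches the information content $\sum_i \log_2 k_i$ bits; this is why a fast uniform coder is a useful building block for flow-based compression.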
Related papers
- Ultra Dual-Path Compression For Joint Echo Cancellation And Noise Suppression [38.09558772881095]
Under fixed compression ratios, dual-path compression combining both time- and frequency-domain methods yields further performance improvements.
The proposed models show competitive performance compared with Fast FullSubNet and DeepFilterNet.
arXiv Detail & Related papers (2023-08-21T21:36:56Z)
- DiffRate: Differentiable Compression Rate for Efficient Vision Transformers [98.33906104846386]
Token compression aims to speed up large-scale vision transformers (e.g., ViTs) by pruning (dropping) or merging tokens.
DiffRate is a novel token compression method with several appealing properties that prior arts lack.
arXiv Detail & Related papers (2023-05-29T10:15:19Z)
- Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal.
arXiv Detail & Related papers (2022-09-28T15:16:58Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem using a VAE-based approach.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound (a generic quantizer of this kind is sketched after this list).
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Lossless Compression with Probabilistic Circuits [42.377045986733776]
Probabilistic Circuits (PCs) are a class of neural networks involving $|p|$ computational units.
We derive efficient encoding and decoding schemes that both have time complexity $\mathcal{O}(\log(D) \cdot |p|)$, where a naive scheme would have costs linear in $D$ and $|p|$.
By scaling up the traditional PC structure learning pipeline, we achieved state-of-the-art results on image datasets such as MNIST.
arXiv Detail & Related papers (2021-11-23T03:30:22Z)
- ANFIC: Image Compression Using Augmented Normalizing Flows [16.161901495436233]
This paper introduces an end-to-end learned image compression system, termed ANFIC, based on Augmented Normalizing Flows (ANF).
In terms of PSNR-RGB, ANFIC performs comparably to or better than the state-of-the-art learned image compression.
In particular, ANFIC achieves state-of-the-art performance when extended with conditional convolution for variable-rate compression with a single model.
arXiv Detail & Related papers (2021-07-18T15:02:31Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- iVPF: Numerical Invertible Volume Preserving Flow for Efficient Lossless Compression [21.983560104199622]
It is nontrivial to store rapidly growing big data nowadays, which demands high-performance compression techniques.
We propose the Numerical Invertible Volume Preserving Flow (iVPF), a computation derived from general volume-preserving flows.
Experiments on various datasets show that the algorithm based on iVPF achieves state-of-the-art compression ratios among lightweight compression algorithms.
arXiv Detail & Related papers (2021-03-30T09:50:58Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by PowerSGD for centralized deep learning, this algorithm uses power-iteration steps to maximize the information transferred per bit (a single such step is sketched after this list).
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
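Returning to the DLPR entry above: a standard way to meet a given $\ell_\infty$ bound $\tau$ on integer residuals is uniform quantization with bin width $2\tau + 1$, which guarantees every residual lands within $\tau$ of its bin center. The sketch below shows this generic construction; it is an assumption-level illustration, and DLPR's actual quantizer may differ in details.

```python
import numpy as np

def quantize_residual(r, tau):
    """Map integer residuals to bin centers so that |r - r_hat| <= tau."""
    step = 2 * tau + 1                              # bin width
    q = np.sign(r) * ((np.abs(r) + tau) // step)    # signed bin index
    return q * step                                 # reconstructed residual

r = np.arange(-10, 11)
assert np.all(np.abs(r - quantize_residual(r, tau=2)) <= 2)
```

Only the bin indices need to be entropy-coded; the decoder rescales by the bin width to recover residuals within the guaranteed error bound.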
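For the PowerGossip entry, the sketch below shows a single power-iteration step that refines a rank-1 approximation of a matrix (standing in for a model difference between neighboring workers); only the two thin factors would need to be communicated. This is a generic power step in the spirit of PowerSGD, assuming rank-1 compression and omitting error feedback and the gossip protocol itself, so it should not be read as the exact PowerGossip algorithm.

```python
import numpy as np

def power_step(m, q):
    """One power-iteration step refining a rank-1 approximation m ~ p q^T."""
    p = m @ q
    p /= np.linalg.norm(p) + 1e-12   # normalize the left factor
    q = m.T @ p                      # best right factor given p
    return p, q

rng = np.random.default_rng(0)
m = rng.standard_normal((64, 32))    # stand-in for a model-difference matrix
q = rng.standard_normal(32)          # warm-started from the previous round
for _ in range(5):                   # repeated steps converge toward the
    p, q = power_step(m, q)          # top singular pair of m
print(np.linalg.norm(m - np.outer(p, q)) / np.linalg.norm(m))
```

In PowerSGD-style schemes, reusing the previous round's factor as a warm start is what lets even a single cheap step per round transfer useful information.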