Lossless Compression with Probabilistic Circuits
- URL: http://arxiv.org/abs/2111.11632v1
- Date: Tue, 23 Nov 2021 03:30:22 GMT
- Title: Lossless Compression with Probabilistic Circuits
- Authors: Anji Liu and Stephan Mandt and Guy Van den Broeck
- Abstract summary: Probabilistic Circuits (PCs) are a class of neural networks involving $|p|$ computational units.
We derive efficient encoding and decoding schemes that both have time complexity $\mathcal{O}(\log(D) \cdot |p|)$, where a naive scheme would have linear costs in $D$ and $|p|$.
By scaling up the traditional PC structure learning pipeline, we achieved state-of-the-art results on image datasets such as MNIST.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite extensive progress on image generation, deep generative models are
suboptimal when applied to lossless compression. For example, models such as
VAEs suffer from a compression cost overhead due to their latent variables that
can only be partially eliminated with elaborate schemes such as bits-back
coding, often resulting in poor single-sample compression rates. To
overcome such problems, we establish a new class of tractable lossless
compression models that permit efficient encoding and decoding: Probabilistic
Circuits (PCs). These are a class of neural networks involving $|p|$
computational units that support efficient marginalization over arbitrary
subsets of the $D$ feature dimensions, enabling efficient arithmetic coding. We
derive efficient encoding and decoding schemes that both have time complexity
$\mathcal{O} (\log(D) \cdot |p|)$, where a naive scheme would have linear costs
in $D$ and $|p|$, making the approach highly scalable. Empirically, our
PC-based (de)compression algorithm runs 5-20x faster than neural compression
algorithms that achieve similar bitrates. By scaling up the traditional PC
structure learning pipeline, we achieved state-of-the-art results on image
datasets such as MNIST. Furthermore, PCs can be naturally integrated with
existing neural compression algorithms to improve the performance of these base
models on natural image datasets. Our results highlight the potential impact
that non-standard learning architectures may have on neural data compression.
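As a rough illustration of the coding scheme the abstract describes, the Python sketch below arithmetic-codes a discrete sequence using conditionals $p(x_i \mid x_{<i})$ obtained as ratios of prefix marginals. The `marginal(prefix)` callable, the toy `Decimal` interval arithmetic, and the i.i.d. stand-in model are assumptions for illustration, not the paper's implementation; in the paper, a probabilistic circuit supplies these marginals, and the key result is that all the required queries can be answered in $\mathcal{O}(\log(D) \cdot |p|)$ time overall, rather than re-evaluating the circuit for every symbol as this naive loop does.

```python
# Minimal sketch (not the paper's code): arithmetic coding driven by a model
# that can evaluate prefix marginals p(X_1..X_k = prefix), as a probabilistic
# circuit does via marginalization over the remaining feature dimensions.
from decimal import Decimal, getcontext

getcontext().prec = 200  # toy unbounded-precision coder; real coders use finite-precision range coding


def encode(x, marginal, alphabet):
    """Arithmetic-code sequence x with conditionals p(x_i | x_<i) = p(x_<=i) / p(x_<i)."""
    low, high = Decimal(0), Decimal(1)
    prefix = []
    for xi in x:
        p_prefix = Decimal(marginal(tuple(prefix))) if prefix else Decimal(1)
        cum = Decimal(0)
        for s in alphabet:                       # scan the conditional CDF
            p = Decimal(marginal(tuple(prefix + [s]))) / p_prefix
            if s == xi:                          # narrow [low, high) to this symbol's slice
                span = high - low
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
        prefix.append(xi)
    return (low + high) / 2                      # any number in [low, high) identifies x


# Usage with a toy i.i.d. Bernoulli(0.8) model standing in for a PC:
iid = lambda prefix: Decimal("0.8") ** sum(prefix) * Decimal("0.2") ** (len(prefix) - sum(prefix))
code = encode([1, 1, 0, 1], iid, alphabet=[0, 1])
```

Decoding is symmetric: it repeatedly locates the sub-interval containing the coded number, which requires the same conditionals and hence the same marginal queries.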
Related papers
- "Lossless" Compression of Deep Neural Networks: A High-dimensional
Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected deep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z) - Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z) - Computationally-Efficient Neural Image Compression with Shallow Decoders [43.115831685920114]
This paper takes a step forward towards closing the gap in decoding complexity by using a shallow or even linear decoding transform resembling that of JPEG.
We exploit the often asymmetrical budget between encoding and decoding, by adopting more powerful encoder networks and iterative encoding.
arXiv Detail & Related papers (2023-04-13T03:38:56Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem with a VAE-based approach.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound (a minimal quantizer sketch follows this list).
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Learning sparse auto-encoders for green AI image coding [5.967279020820772]
In this paper, we address the problem of lossy image compression using a CAE with a small memory footprint and low computational power usage.
We propose a constrained approach and a new structured sparse learning method.
Experimental results show that the $\ell_{1,1}$ constraint provides the best structured proximal sparsity, resulting in a high reduction of memory and computational cost.
arXiv Detail & Related papers (2022-09-09T06:31:46Z) - COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z) - iFlow: Numerically Invertible Flows for Efficient Lossless Compression
via a Uniform Coder [38.297114268193]
iFlow is a new method for achieving efficient lossless compression.
iFlow achieves state-of-the-art compression ratios and is $5\times$ quicker than other high-performance schemes.
arXiv Detail & Related papers (2021-11-01T14:15:58Z) - Learning Scalable $\ell_\infty$-constrained Near-lossless Image
Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
arXiv Detail & Related papers (2021-03-31T11:53:36Z) - A flexible, extensible software framework for model compression based on
the LC algorithm [10.787390511207683]
We propose a software framework that allows a user to compress a neural network or other machine learning model with minimal effort.
The library is written in Python and PyTorch and is available on GitHub.
arXiv Detail & Related papers (2020-05-15T21:14:48Z)
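The near-lossless entries above (the DLPR framework and the scalable $\ell_\infty$-constrained framework) both rely on bounding the residual error per pixel. As a minimal sketch, assuming uniform scalar quantization with bin width $2\tau + 1$ (an illustrative choice, not either paper's exact scheme), the reconstruction error stays within $\tau$ in the $\ell_\infty$ sense:

```python
import numpy as np

def quantize_residual(residual: np.ndarray, tau: int) -> np.ndarray:
    """Uniform quantization with bin width 2*tau + 1; error is at most tau per element."""
    return np.round(residual / (2 * tau + 1)).astype(np.int64)

def dequantize_residual(q: np.ndarray, tau: int) -> np.ndarray:
    return q * (2 * tau + 1)

# Example: the l_infinity error bound holds elementwise.
r = np.array([-7, -3, 0, 2, 5, 11])
r_hat = dequantize_residual(quantize_residual(r, tau=2), tau=2)
assert np.all(np.abs(r - r_hat) <= 2)
```

The quantized residuals are then entropy-coded, which both entries address with learned probability models of the residual.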