COIN++: Data Agnostic Neural Compression
- URL: http://arxiv.org/abs/2201.12904v1
- Date: Sun, 30 Jan 2022 20:12:04 GMT
- Title: COIN++: Data Agnostic Neural Compression
- Authors: Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee
Whye Teh, Arnaud Doucet
- Abstract summary: COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
- Score: 55.27113889737545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural compression algorithms are typically based on autoencoders that
require specialized encoder and decoder architectures for different data
modalities. In this paper, we propose COIN++, a neural compression framework
that seamlessly handles a wide range of data modalities. Our approach is based
on converting data to implicit neural representations, i.e. neural functions
that map coordinates (such as pixel locations) to features (such as RGB
values). Then, instead of storing the weights of the implicit neural
representation directly, we store modulations applied to a meta-learned base
network as a compressed code for the data. We further quantize and entropy code
these modulations, leading to large compression gains while reducing encoding
time by two orders of magnitude compared to baselines. We empirically
demonstrate the effectiveness of our method by compressing various data
modalities, from images to medical and climate data.
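As a rough illustration of the modulation idea (a sketch, not the authors' released code), the snippet below applies per-datum shift modulations to a shared sine-activated base MLP; the layer sizes, sine activation, and shift-only modulation are assumptions. Only the modulation vector is stored per datum and would be quantized and entropy coded, while the base network is meta-learned once and shared.

```python
# Minimal sketch (details assumed, not the released implementation): a shared,
# sine-activated base MLP whose hidden layers are shifted by a small per-datum
# modulation vector. Only `mods` would be quantized and entropy coded; the base
# weights are meta-learned once and shared across the whole dataset.
import torch
import torch.nn as nn

class ModulatedSiren(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, layers=3, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.first = nn.Linear(in_dim, hidden)
        self.body = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(layers - 1)])
        self.last = nn.Linear(hidden, out_dim)

    def forward(self, coords, modulation):
        # coords: (N, in_dim) pixel locations; modulation: (layers, hidden) per-datum shifts
        h = torch.sin(self.w0 * self.first(coords) + modulation[0])
        for i, layer in enumerate(self.body):
            h = torch.sin(self.w0 * layer(h) + modulation[i + 1])
        return self.last(h)  # e.g. RGB values

base = ModulatedSiren()
coords = torch.rand(1024, 2)                    # coordinates in [0, 1]^2
mods = torch.zeros(3, 256, requires_grad=True)  # the compressed code for one image
rgb = base(coords, mods)                        # encoding = a few gradient steps on `mods`
```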
Related papers
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
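A toy reading of the denoising idea (a sketch assuming a linear shallow autoencoder and a soft-thresholding denoiser, neither taken from the paper):

```python
# Toy sketch (assumptions: linear encoder/decoder, fixed soft-threshold denoiser).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingShallowAE(nn.Module):
    def __init__(self, dim=128, code=32, threshold=0.1):
        super().__init__()
        self.enc = nn.Linear(dim, code, bias=False)
        self.dec = nn.Linear(code, dim, bias=False)
        self.threshold = threshold

    def forward(self, x):
        recon = self.dec(self.enc(x))
        # Soft-thresholding pushes small reconstruction values to exactly zero,
        # matching the sparsity of the input better than a purely linear map.
        return torch.sign(recon) * F.relu(recon.abs() - self.threshold)

x = torch.zeros(16, 128)
x[:, :10] = torch.randn(16, 10)    # sparse inputs: only 10 active coordinates
model = DenoisingShallowAE()
loss = F.mse_loss(model(x), x)     # train with a plain reconstruction loss
```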
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Neural-based Compression Scheme for Solar Image Data [8.374518151411612]
We propose a neural network-based lossy compression method for NASA's data-intensive imagery missions: an adversarially trained network equipped with local and non-local attention modules to capture both the local and global structure of the image.
As a proof of concept for use of this algorithm in SDO data analysis, we have performed coronal hole (CH) detection using our compressed images.
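For reference, a generic non-local (spatial self-attention) block of the kind the summary mentions; the channel reduction, residual connection, and placement inside the compression network are assumptions rather than the paper's exact design:

```python
# Generic non-local attention block (assumed layout, not the paper's exact module).
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        inner = channels // 2
        self.q = nn.Conv2d(channels, inner, 1)
        self.k = nn.Conv2d(channels, inner, 1)
        self.v = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, inner)
        k = self.k(x).flatten(2)                   # (b, inner, hw)
        v = self.v(x).flatten(2).transpose(1, 2)   # (b, hw, inner)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # every position attends to all others
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)  # residual connection so the block can be dropped into an encoder

x = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```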
arXiv Detail & Related papers (2023-11-06T04:13:58Z)
- Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
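A minimal sketch of the variational part only, assuming mean-field Gaussian weights and an ELBO-style objective; the relative entropy coding step that turns a posterior sample into a bitstream is not shown, and all sizes and the prior scale are assumptions:

```python
# Sketch of a mean-field Gaussian ("variational") layer for overfitting an INR to one datum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, d_in, d_out, prior_std=0.1):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.log_std = nn.Parameter(torch.full((d_out, d_in), -4.0))
        self.prior_std = prior_std

    def forward(self, x):
        w = self.mu + self.log_std.exp() * torch.randn_like(self.mu)  # reparameterised weight sample
        return F.linear(x, w)

    def kl(self):
        # KL( N(mu, std^2) || N(0, prior_std^2) ), summed over all weights
        var, pvar = self.log_std.exp() ** 2, self.prior_std ** 2
        return 0.5 * ((var + self.mu ** 2) / pvar - 1 - (var / pvar).log()).sum()

layer = BayesianLinear(2, 3)
coords, targets = torch.rand(256, 2), torch.rand(256, 3)
loss = F.mse_loss(layer(coords), targets) + 1e-4 * layer.kl()  # overfit, then code one posterior sample
```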
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
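A rough sketch of a vector-quantized auto-decoder over a feature grid, assuming a softmax relaxation during training and hard codebook indices at test time; the grid size, codebook size, and relaxation are placeholders, not the paper's exact optimization:

```python
# Sketch: store per-cell codebook logits instead of per-cell float features.
import torch
import torch.nn as nn

class VQFeatureGrid(nn.Module):
    def __init__(self, grid=32, codebook=256, feat=8):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook, feat))
        # Learnable logits over codebook entries; only the argmax index per cell
        # needs to be kept once training is done.
        self.logits = nn.Parameter(torch.zeros(grid * grid, codebook))

    def forward(self, hard=False):
        if hard:
            idx = self.logits.argmax(dim=-1)          # integers: the compressed representation
            return self.codebook[idx]
        weights = torch.softmax(self.logits, dim=-1)  # soft selection keeps training differentiable
        return weights @ self.codebook

grid = VQFeatureGrid()
train_feats = grid()          # (1024, 8), trained end-to-end with a downstream renderer
test_feats = grid(hard=True)  # what would actually be stored: one index per cell
```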
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Training and Generating Neural Networks in Compressed Weight Space [9.952319575163607]
Indirect encodings or end-to-end compression of weight matrices could help to scale approaches in which the inputs or outputs of one neural network are the weight matrices of another.
Our goal is to open a discussion on this topic, starting with recurrent neural networks for character-level language modelling.
arXiv Detail & Related papers (2021-12-31T16:50:31Z)
- Implicit Neural Video Compression [17.873088127087605]
We propose a method to compress full-resolution video sequences with implicit neural representations.
Each frame is represented as a neural network that maps coordinate positions to pixel values.
We use a separate implicit network to modulate the coordinate inputs, which enables efficient motion compensation between frames.
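A loose sketch of that setup, assuming the modulation network predicts a coordinate offset (a flow-like warp) that is added before querying a shared coordinate-to-RGB network; the sizes, activations, and exact way the modulation enters are placeholders:

```python
# Sketch: a small implicit "flow" network warps coordinates before the shared RGB network.
import torch
import torch.nn as nn

class ImplicitVideo(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.flow = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
        self.rgb = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, xy, t):
        # xy: (N, 2) pixel coordinates, t: (N, 1) frame index / time
        offset = self.flow(torch.cat([xy, t], dim=-1))  # predicted per-pixel displacement
        return self.rgb(xy + offset)                    # query the shared frame representation

model = ImplicitVideo()
xy = torch.rand(4096, 2)
t = torch.full((4096, 1), 0.25)
rgb = model(xy, t)  # fit to the video with an MSE loss over sampled pixels
```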
arXiv Detail & Related papers (2021-12-21T15:59:00Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
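One way to picture this (a sketch with assumed patch size, tiny-MLP width, and weight-generation head, not the paper's exact design): each encoder location emits the weights of a small network that maps within-patch coordinates to class logits.

```python
# Sketch: a 1x1 conv predicts, per encoder location, the weights of a tiny coordinate-to-logits MLP.
import torch
import torch.nn as nn

classes, hidden, patch = 21, 16, 8
n_params = (2 * hidden + hidden) + (hidden * classes + classes)  # weights + biases of a 2-layer MLP

weight_head = nn.Conv2d(256, n_params, kernel_size=1)  # runs on the encoder feature map

def decode_patch(params):
    # params: (n_params,) -> logits for every coordinate in an 8x8 patch
    w1 = params[:2 * hidden].view(hidden, 2)
    b1 = params[2 * hidden:3 * hidden]
    rest = params[3 * hidden:]
    w2 = rest[:hidden * classes].view(classes, hidden)
    b2 = rest[hidden * classes:]
    ys, xs = torch.meshgrid(torch.linspace(0, 1, patch), torch.linspace(0, 1, patch), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (64, 2) within-patch coordinates
    h = torch.relu(coords @ w1.t() + b1)
    return (h @ w2.t() + b2).reshape(patch, patch, classes)    # per-pixel class logits

feats = torch.randn(1, 256, 4, 4)          # encoder output
params = weight_head(feats)[0, :, 0, 0]    # weights for the patch at location (0, 0)
logits = decode_patch(params)              # (8, 8, 21)
```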
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [70.0243910593064]
Key to the success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
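The permutation fact is easy to verify numerically; the tiny check below permutes the output units of one layer and the matching inputs of the next and confirms the function is unchanged:

```python
# Numerical check: permuting adjacent layers' shared dimension preserves the network's function.
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(8, 4), torch.randn(8)
W2, b2 = torch.randn(3, 8), torch.randn(3)
x = torch.randn(5, 4)

def net(W1, b1, W2, b2, x):
    return torch.relu(x @ W1.t() + b1) @ W2.t() + b2

perm = torch.randperm(8)
out_original = net(W1, b1, W2, b2, x)
out_permuted = net(W1[perm], b1[perm], W2[:, perm], b2, x)
print(torch.allclose(out_original, out_permuted, atol=1e-6))  # True
```

Because the activation is applied element-wise, it commutes with the permutation, which is what makes the search over permutations well-defined.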
arXiv Detail & Related papers (2020-10-29T15:47:26Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal with a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
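A small sketch of the measurement model described above, plus one unrolled proximal-gradient (soft-threshold) step of the kind such unfolded networks learn; the step size and threshold here are placeholders, not the paper's learned per-layer values:

```python
# Forward model: each channel = convolution of a shared source with its own sparse filter.
import numpy as np

rng = np.random.default_rng(0)
source = rng.standard_normal(64)                 # common source signal shared by all channels

def sparse_filter(length=16, nonzeros=3):
    h = np.zeros(length)
    h[rng.choice(length, nonzeros, replace=False)] = rng.standard_normal(nonzeros)
    return h

filters = [sparse_filter() for _ in range(4)]
measurements = [np.convolve(source, h) for h in filters]

def ista_step(h_est, y, s, step=0.01, threshold=0.05):
    # One proximal-gradient step on ||s * h - y||^2 followed by a sparsity-inducing
    # soft threshold; an unfolded network would learn `step` and `threshold` per layer.
    residual = np.convolve(s, h_est) - y
    grad = np.correlate(residual, s, mode="full")[len(s) - 1 : len(s) - 1 + len(h_est)]
    h_new = h_est - step * grad
    return np.sign(h_new) * np.maximum(np.abs(h_new) - threshold, 0.0)

h0 = np.zeros(16)
h1 = ista_step(h0, measurements[0], source)  # first estimate of channel 0's sparse filter
```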
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.