COIN: COmpression with Implicit Neural representations
- URL: http://arxiv.org/abs/2103.03123v1
- Date: Wed, 3 Mar 2021 10:58:39 GMT
- Title: COIN: COmpression with Implicit Neural representations
- Authors: Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet
- Abstract summary: We propose a new simple approach for image compression.
Instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.
- Score: 64.02694714768691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new simple approach for image compression: instead of storing
the RGB values for each pixel of an image, we store the weights of a neural
network overfitted to the image. Specifically, to encode an image, we fit it
with an MLP which maps pixel locations to RGB values. We then quantize and
store the weights of this MLP as a code for the image. To decode the image, we
simply evaluate the MLP at every pixel location. We found that this simple
approach outperforms JPEG at low bit-rates, even without entropy coding or
learning a distribution over weights. While our framework is not yet
competitive with state of the art compression methods, we show that it has
various attractive properties which could make it a viable alternative to other
neural data compression approaches.
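The encode/decode pipeline described above makes the bit-rate easy to reason about: since the compressed code is just the quantized MLP weights, the rate follows directly from the parameter count. A minimal sketch of that accounting (the layer sizes and 16-bit quantization below are illustrative assumptions, not the paper's exact configuration):

```python
# COIN stores an image as the quantized weights of an MLP mapping
# (x, y) pixel coordinates to (R, G, B). The bit-rate is therefore
#   bpp = num_parameters * bits_per_weight / num_pixels.
# Layer sizes and 16-bit quantization here are illustrative assumptions.

def mlp_param_count(layer_sizes):
    """Total weights plus biases of a fully-connected MLP."""
    return sum(fan_in * fan_out + fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

def coin_bpp(layer_sizes, height, width, bits_per_weight=16):
    """Bits per pixel when the quantized weights are the image code."""
    return mlp_param_count(layer_sizes) * bits_per_weight / (height * width)

# A small MLP from (x, y) to (R, G, B), stored for a 768x512 image.
sizes = [2, 20, 20, 20, 20, 3]
print(mlp_param_count(sizes))               # 1383 parameters
print(round(coin_bpp(sizes, 768, 512), 3))  # 0.056 bpp
```

Shrinking or widening the hidden layers moves the code along the rate axis; reconstruction quality is then whatever the overfitted MLP achieves at that size.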
Related papers
- Transformer based Pluralistic Image Completion with Reduced Information Loss [72.92754600354199]
Transformer-based methods have achieved great success in image inpainting recently.
They regard each pixel as a token, thus suffering from an information loss issue.
We propose a new transformer-based framework called "PUT".
arXiv Detail & Related papers (2024-03-31T01:20:16Z)
- Exploring the Limits of Semantic Image Compression at Micro-bits per Pixel [8.518076792914039]
We use GPT-4V and DALL-E3 from OpenAI to explore the quality-compression frontier for image compression.
We push semantic compression as low as 100 µbpp (up to 10,000× smaller than JPEG) by introducing an iterative reflection process.
We further hypothesize this 100 µbpp level represents a soft limit on semantic compression at standard image resolutions.
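To put 100 µbpp in perspective, a quick back-of-the-envelope calculation (the image size below is chosen only for illustration):

```python
# At 100 microbits per pixel, the entire code for a 1024x1024 image
# is on the order of a dozen bytes -- roughly a short text prompt.
bpp = 100e-6                  # 100 microbits per pixel
pixels = 1024 * 1024
total_bits = bpp * pixels
total_bytes = total_bits / 8
print(round(total_bits, 1))   # 104.9 bits
print(round(total_bytes, 1))  # 13.1 bytes
```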
arXiv Detail & Related papers (2024-02-21T05:14:30Z)
- You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
However, LIC methods fail to explicitly exploit the image structure and texture components that are crucial for compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z)
- GamutMLP: A Lightweight MLP for Color Loss Recovery [40.273821032576606]
GamutMLP takes approximately 2 seconds to optimize and requires only 23 KB of storage.
We demonstrate the effectiveness of our approach for color recovery and compare it with alternative strategies.
As part of this effort, we introduce a new color gamut dataset of 2200 wide-gamut/small-gamut images for training and testing.
arXiv Detail & Related papers (2023-04-23T20:26:11Z)
- CoordFill: Efficient High-Resolution Image Inpainting via Parameterized Coordinate Querying [52.91778151771145]
In this paper, we break these limitations for the first time by leveraging recent developments in continuous implicit representation.
Experiments show that the proposed method achieves real-time performance on 2048×2048 images using a single GTX 2080 Ti GPU.
arXiv Detail & Related papers (2023-03-15T11:13:51Z)
- Hyperspectral Image Compression Using Implicit Neural Representation [1.4721615285883425]
This paper develops a method for hyperspectral image compression using implicit neural representations.
We show the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
arXiv Detail & Related papers (2023-02-08T15:27:00Z)
- SINCO: A Novel structural regularizer for image compression using implicit neural representations [10.251120382395332]
Implicit neural representations (INRs) have recently been proposed as deep learning (DL) based solutions for image compression.
We present structural regularization for INR compression (SINCO) as a novel INR method for image compression.
arXiv Detail & Related papers (2022-10-26T18:35:54Z)
- PS-NeRV: Patch-wise Stylized Neural Representations for Videos [13.14511356472246]
PS-NeRV represents videos as a function of patches and the corresponding patch coordinate.
It naturally inherits the advantages of image-wise methods, and achieves excellent reconstruction performance with fast decoding speed.
arXiv Detail & Related papers (2022-08-07T14:45:30Z)
- RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition [123.59890802196797]
We propose RepMLP, a multi-layer-perceptron-style neural network building block for image recognition.
We construct convolutional layers inside a RepMLP during training and merge them into the FC for inference.
By inserting RepMLP into traditional CNNs, we improve ResNets by 1.8% accuracy on ImageNet, 2.9% accuracy for face recognition, and 2.3% mIoU on Cityscapes, with lower FLOPs.
arXiv Detail & Related papers (2021-05-05T06:17:40Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.