PILC: Practical Image Lossless Compression with an End-to-end GPU
Oriented Neural Framework
- URL: http://arxiv.org/abs/2206.05279v1
- Date: Fri, 10 Jun 2022 03:00:10 GMT
- Title: PILC: Practical Image Lossless Compression with an End-to-end GPU
Oriented Neural Framework
- Authors: Ning Kang, Shanzhao Qiu, Shifeng Zhang, Zhenguo Li, Shutao Xia
- Abstract summary: We propose an end-to-end image compression framework that achieves 200 MB/s for both compression and decompression with a single NVIDIA Tesla V100 GPU.
Experiments show that our framework compresses better than PNG by a margin of 30% in multiple datasets.
- Score: 88.18310777246735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative model based image lossless compression algorithms have seen a
great success in improving compression ratio. However, the throughput for most
of them is less than 1 MB/s even with the most advanced AI accelerated chips,
preventing them from most real-world applications, which often require 100
MB/s. In this paper, we propose PILC, an end-to-end image lossless compression
framework that achieves 200 MB/s for both compression and decompression with a
single NVIDIA Tesla V100 GPU, 10 times faster than the previous most efficient
framework. To obtain this result, we first develop an AI codec that combines an
auto-regressive model with a VQ-VAE and performs well in a lightweight setting;
we then design a low-complexity entropy coder that works well with our codec.
Experiments show that our framework compresses better than PNG by a margin of
30% in multiple datasets. We believe this is an important step to bring AI
compression forward to commercial use.
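The framework's two ingredients are a probabilistic model over image symbols and a matching entropy coder. Purely as an illustrative sketch (the paper's GPU-oriented coder and its VQ-VAE/auto-regressive codec are far more involved), the following minimal exact arithmetic coder shows how model-assigned probabilities are turned into a compressed code and losslessly decoded back:

```python
from fractions import Fraction

def encode(symbols, probs):
    """Map a symbol sequence to a single rational in [0, 1)."""
    low, width = Fraction(0), Fraction(1)
    cum, c = {}, Fraction(0)
    for s, p in probs.items():       # cumulative sub-intervals per symbol
        cum[s] = (c, p)
        c += p
    for s in symbols:                # narrow the interval for each symbol
        start, p = cum[s]
        low += width * start
        width *= p
    return low + width / 2           # any value inside the final interval

def decode(code, n, probs):
    """Recover n symbols from the code by repeated interval lookup."""
    cum, c = [], Fraction(0)
    for s, p in probs.items():
        cum.append((s, c, p))
        c += p
    out = []
    for _ in range(n):
        for s, start, p in cum:
            if start <= code < start + p:
                out.append(s)
                code = (code - start) / p   # rescale to [0, 1)
                break
    return out

# toy 3-symbol alphabet with model-assigned probabilities (hypothetical)
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
msg = list("abacab")
code = encode(msg, probs)
```

Exact rationals make the round trip trivially correct but slow; practical coders (including PILC's) use fixed-precision range coding and vectorized execution to reach the throughputs the paper targets.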
Related papers
- CMC-Bench: Towards a New Paradigm of Visual Signal Compression [85.1839779884282]
We introduce CMC-Bench, a benchmark of the cooperative performance of Image-to-Text (I2T) and Text-to-Image (T2I) models for image compression.
At ultra-low bitrates, this paper shows that the combination of some I2T and T2I models has surpassed the most advanced visual signal protocols.
arXiv Detail & Related papers (2024-06-13T17:41:37Z)
- GaussianImage: 1000 FPS Image Representation and Compression by 2D Gaussian Splatting [27.33121386538575]
Implicit neural representations (INRs) recently achieved great success in image representation and compression.
However, their substantial GPU resource demands often hinder use on low-end devices with limited memory.
We propose a groundbreaking paradigm of image representation and compression by 2D Gaussian Splatting, named GaussianImage.
arXiv Detail & Related papers (2024-03-13T14:02:54Z)
- MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model [78.4051835615796]
This paper proposes a method called Multimodal Image Semantic Compression.
It consists of an LMM encoder that extracts the semantic information of the image, a map encoder that locates the regions corresponding to that semantics, an image encoder that generates an extremely compressed bitstream, and a decoder that reconstructs the image from the above information.
It can achieve optimal consistency and perception results while saving 50% bitrate, which has strong potential applications in the next generation of storage and communication.
arXiv Detail & Related papers (2024-02-26T17:11:11Z)
- Random-Access Neural Compression of Material Textures [1.2971248363246106]
We propose a novel neural compression technique specifically designed for material textures.
We unlock two more levels of detail, i.e., 16x more texels, using low-bitrate compression.
Our method allows on-demand, real-time decompression with random access, enabling compression on disk and memory.
arXiv Detail & Related papers (2023-05-26T17:16:22Z)
- Computationally-Efficient Neural Image Compression with Shallow Decoders [43.115831685920114]
This paper takes a step towards closing the gap in decoding complexity by using a shallow or even linear decoding transform resembling that of JPEG.
We exploit the often asymmetrical budget between encoding and decoding, by adopting more powerful encoder networks and iterative encoding.
arXiv Detail & Related papers (2023-04-13T03:38:56Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
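A per-pixel $\ell_\infty$ bound of $\tau$ is typically obtained with a uniform residual quantizer of step $2\tau + 1$. The sketch below uses that standard scheme (assumed here as an illustration, not taken verbatim from the DLPR paper): every integer residual is snapped to the nearest multiple of the step, so the reconstruction error never exceeds $\tau$, while the quantized indices take far fewer values and entropy-code more cheaply.

```python
def quantize_residual(r, tau):
    """Quantize an integer residual r so that |r - r_hat| <= tau.

    Uses a uniform quantizer with step 2*tau + 1: each residual is
    rounded to the nearest multiple of the step, which both bounds
    the error by tau and shrinks the symbol alphabet for entropy coding.
    """
    step = 2 * tau + 1
    if r >= 0:
        q = (r + tau) // step        # round-to-nearest for non-negative r
    else:
        q = -((-r + tau) // step)    # symmetric rounding for negative r
    return q * step

# example: with tau = 2 the residual 3 is reconstructed as 5 (error 2 <= tau)
r_hat = quantize_residual(3, 2)
```

Setting tau = 0 makes the step 1 and the quantizer the identity, which recovers the lossless mode as a special case.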
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- A Unified End-to-End Framework for Efficient Deep Image Compression [35.156677716140635]
We propose a unified framework called Efficient Deep Image Compression (EDIC) based on three new technologies.
Specifically, we design an auto-encoder style network for learning based image compression.
Our EDIC method can also be readily incorporated with the Deep Video Compression (DVC) framework to further improve the video compression performance.
arXiv Detail & Related papers (2020-02-09T14:21:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.