Substitutional Neural Image Compression
- URL: http://arxiv.org/abs/2105.07512v1
- Date: Sun, 16 May 2021 20:53:31 GMT
- Title: Substitutional Neural Image Compression
- Authors: Xiao Wang, Wei Jiang, Wei Wang, Shan Liu, Brian Kulis, Peter Chin
- Abstract summary: Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
- Score: 48.20906717052056
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We describe Substitutional Neural Image Compression (SNIC), a general
approach for enhancing any neural image compression model that requires no
data or additional tuning of the trained model. It boosts compression
performance toward a flexible distortion metric and enables bit-rate control
using a single model instance. The key idea is to replace the image to be
compressed with a substitutional one that outperforms the original one in a
desired way. Finding such a substitute is inherently difficult for conventional
codecs, yet surprisingly favorable for neural compression models thanks to
their fully differentiable structures. With gradients of a particular loss
backpropagated to the input, a desired substitute can be efficiently crafted
iteratively. We demonstrate the effectiveness of SNIC, when combined with
various neural compression models and target metrics, in improving compression
quality and performing bit-rate control measured by rate-distortion curves.
Empirical results of control precision and generation speed are also discussed.
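The core loop described in the abstract, crafting a substitute input by descending on a rate-distortion loss while the trained codec stays fixed, can be sketched with a toy stand-in codec. Everything below is an illustrative assumption, not the paper's architecture: the "codec" is a random linear encoder with a tanh nonlinearity, the rate term is a crude latent-energy proxy, and gradients are taken by finite differences rather than true backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.3      # toy "encoder" weights (assumed, untrained)
x_orig = rng.normal(size=8)            # the image to be compressed (a vector here)

def codec_loss(x):
    """Rate-distortion loss of compressing x, with distortion measured
    against the ORIGINAL image, as SNIC requires."""
    y = np.tanh(W @ x)                        # stand-in encoder latent
    x_hat = W.T @ y                           # stand-in decoder reconstruction
    distortion = np.sum((x_hat - x_orig) ** 2)
    rate = np.sum(y ** 2)                     # crude rate proxy (latent energy)
    return distortion + 0.1 * rate

def input_gradient(x, eps=1e-5):
    """Central finite-difference gradient w.r.t. the input; a real neural
    codec would backpropagate through its differentiable structure instead."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (codec_loss(x + d) - codec_loss(x - d)) / (2 * eps)
    return g

# Iteratively craft the substitute: start from the original image and
# descend on the input while the codec parameters stay frozen.
x_sub = x_orig.copy()
for _ in range(200):
    x_sub = x_sub - 0.05 * input_gradient(x_sub)

print(codec_loss(x_sub) < codec_loss(x_orig))  # the substitute achieves a lower R-D loss
```

Feeding `x_sub` instead of `x_orig` into the (unchanged) codec then yields the improved rate-distortion trade-off; changing the loss weighting is what enables bit-rate control from a single model instance.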
Related papers
- Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption [57.056311855630916]
We propose a Controllable Generative Image Compression framework, Control-GIC.
It is capable of fine-grained adaption across a broad spectrum while ensuring high-fidelity and general compression.
We develop a conditional decoding mechanism that can trace back to historic encoded multi-granularity representations.
arXiv Detail & Related papers (2024-06-02T14:22:09Z)
- A Rate-Distortion-Classification Approach for Lossy Image Compression [0.0]
In lossy image compression, the objective is to achieve minimal signal distortion while compressing images to a specified bit rate.
To bridge the gap between image compression and visual analysis, we propose a Rate-Distortion-Classification (RDC) model for lossy image compression.
arXiv Detail & Related papers (2024-05-06T14:11:36Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to tackle the issue of high-fidelity fine variable-rate image compression.
IAT and QLevel together give the image compression model the ability of fine variable-rate control while better maintaining the image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
arXiv Detail & Related papers (2022-09-12T07:14:07Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- Slimmable Compressive Autoencoders for Practical Neural Image Compression [20.715312224456138]
We propose slimmable compressive autoencoders (SlimCAEs) for practical image compression.
SlimCAEs are highly flexible models that provide excellent rate-distortion performance, variable rate, and dynamic adjustment of memory, computational cost and latency.
arXiv Detail & Related papers (2021-03-29T16:12:04Z)
- Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation [12.009880944927069]
We propose a continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE).
AG-VAE utilizes a pair of gain units to achieve discrete rate adaptation in a single model with negligible additional computation.
Our method achieves comparable quantitative performance with SOTA learned image compression methods and better qualitative performance than classical image codecs.
arXiv Detail & Related papers (2020-03-04T11:42:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.