Frequency-Aware Re-Parameterization for Over-Fitting Based Image Compression
- URL: http://arxiv.org/abs/2310.08068v1
- Date: Thu, 12 Oct 2023 06:32:12 GMT
- Title: Frequency-Aware Re-Parameterization for Over-Fitting Based Image Compression
- Authors: Yun Ye, Yanjie Pan, Qually Jiang, Ming Lu, Xiaoran Fang, Beryl Xu
- Abstract summary: Over-fitting-based image compression requires weight compactness for compression and fast convergence for practical use.
This paper presents a simple re-parameterization method to train CNNs with reduced weight storage and accelerated convergence.
The proposed method is verified with extensive experiments of over-fitting-based image restoration on various datasets, achieving up to -46.12% BD-rate on top of HEIF with only 200 iterations.
- Score: 12.725194101094711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-fitting-based image compression requires weight compactness for
compression and fast convergence for practical use, posing challenges for deep
convolutional neural network (CNN) based methods. This paper presents a
simple re-parameterization method to train CNNs with reduced weight storage
and accelerated convergence. The convolution kernels are re-parameterized as a
weighted sum of discrete cosine transform (DCT) kernels, enabling direct
optimization in the frequency domain. Combined with L1 regularization, the
proposed method surpasses vanilla convolutions by achieving significantly
improved rate-distortion performance at low computational cost. The proposed
method is verified with extensive experiments of over-fitting-based image
restoration on various datasets, achieving up to -46.12% BD-rate on top of
HEIF with only 200 iterations.
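To make the core idea concrete, below is a minimal PyTorch sketch of a convolution whose kernel is a learnable weighted sum of fixed 2D DCT basis kernels, with an L1 penalty on the frequency-domain coefficients. This illustrates the technique the abstract describes, not the authors' implementation; the names (`dct2_basis`, `DCTConv2d`, `l1_penalty`) are invented for this sketch.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def dct2_basis(k: int) -> torch.Tensor:
    """Return the k*k orthonormal 2D DCT-II basis kernels as a (k*k, k, k) tensor."""
    n = torch.arange(k, dtype=torch.float32)
    # 1D DCT-II: C[u, x] = s(u) * cos(pi * (2x + 1) * u / (2k))
    C = torch.cos(math.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    C[0] /= math.sqrt(2.0)
    C *= math.sqrt(2.0 / k)
    # 2D kernels are outer products of row and column frequency vectors.
    return torch.einsum('un,vm->uvnm', C, C).reshape(k * k, k, k)

class DCTConv2d(nn.Module):
    """Convolution whose kernel is a weighted sum of fixed DCT kernels,
    so training optimizes frequency-domain coefficients directly."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, padding: int = 1):
        super().__init__()
        self.register_buffer('basis', dct2_basis(k))   # (k*k, k, k), not trained
        self.coef = nn.Parameter(0.02 * torch.randn(out_ch, in_ch, k * k))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.padding = padding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial kernel: w[o, i] = sum_f coef[o, i, f] * basis[f]
        w = torch.einsum('oif,fhw->oihw', self.coef, self.basis)
        return F.conv2d(x, w, self.bias, padding=self.padding)

    def l1_penalty(self) -> torch.Tensor:
        # L1 on the frequency coefficients encourages sparse, compact weights.
        return self.coef.abs().sum()
```

In training, the L1 term would be added to the distortion loss, e.g. `loss = mse + lam * sum(m.l1_penalty() for m in dct_layers)`, pushing many frequency coefficients toward zero so the overfitted weights compress well.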
Related papers
- Convolutional Neural Network Compression Based on Low-Rank Decomposition [3.3295360710329738]
This paper proposes a model compression method based on low-rank decomposition that integrates Variational Bayesian Matrix Factorization (VBMF).
VBMF is employed to estimate the rank of the weight tensor at each layer.
Experimental results show that for both high and low compression ratios, our compressed model exhibits strong performance.
arXiv Detail & Related papers (2024-08-29T06:40:34Z)
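The VBMF rank-selection step is specific to that paper; purely to illustrate the low-rank decomposition it builds on, here is a hedged sketch that factorizes a linear layer with truncated SVD, where the rank `r` stands in for what VBMF would estimate:

```python
import torch
import torch.nn as nn

def lowrank_factorize(layer: nn.Linear, r: int) -> nn.Sequential:
    """Replace an (out x in) Linear with two rank-r factors via truncated SVD.
    Parameter count drops from out*in to r*(in + out), plus the bias."""
    W = layer.weight.data                                  # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(W.shape[1], r, bias=False)
    second = nn.Linear(r, W.shape[0], bias=layer.bias is not None)
    first.weight.data = S[:r].sqrt()[:, None] * Vh[:r]     # (r, in)
    second.weight.data = U[:, :r] * S[:r].sqrt()[None, :]  # (out, r)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)
```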
- Reduced storage direct tensor ring decomposition for convolutional neural networks compression [0.0]
We propose a novel low-rank CNN compression method based on reduced storage direct tensor ring decomposition (RSDTR).
The proposed method offers higher circular mode permutation flexibility and achieves large parameter and FLOPS compression rates.
Experiments, performed on the CIFAR-10 and ImageNet datasets, clearly demonstrate the efficiency of RSDTR in comparison to other state-of-the-art CNNs compression approaches.
arXiv Detail & Related papers (2024-05-17T14:16:40Z)
- Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model achieves better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- Learning True Rate-Distortion-Optimization for End-To-End Image Compression [59.816251613869376]
Rate-distortion optimization (RDO) is a crucial part of traditional image and video compression.
In this paper, we enhance the training process by introducing low-complexity estimations of the RDO result.
We achieve average rate savings of 19.6% in MS-SSIM over the previous RDONet model, which equals rate savings of 27.3% over a comparable conventional deep image coder.
arXiv Detail & Related papers (2022-01-05T13:02:00Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms manage rate-distortion tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
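For context on what such a learned table controls, here is a minimal JPEG-style sketch (NumPy/SciPy): DCT coefficients of an 8x8 block are divided by a quantization table and rounded, so larger table entries discard more detail. The toy table below is illustrative, not the paper's learned one:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    """JPEG-style lossy step: 2D DCT of an 8x8 block, divide by table, round."""
    return np.round(dctn(block - 128.0, norm='ortho') / qtable)

def dequantize_block(q: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    """Approximate inverse: rescale the coefficients and invert the DCT."""
    return idctn(q * qtable, norm='ortho') + 128.0

# Toy table: divisors grow with frequency, so high-frequency detail is
# quantized more coarsely -- exactly the tradeoff a learned table tunes.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
qtable = 1.0 + 2.0 * (i + j)

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
restored = dequantize_block(quantize_block(block, qtable), qtable)
```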
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
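As a rough sketch of the INR idea behind that pipeline (overfit a small MLP mapping pixel coordinates to colors, then store the weights), assuming PyTorch; the quantization, retraining, and entropy-coding stages are omitted, and real pipelines typically use sine activations or positional encodings for sharper fits:

```python
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """MLP mapping an (x, y) coordinate to an RGB value; the image is 'stored'
    as the weights, which a full pipeline would quantize and entropy-code."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy)

def fit_inr(image: torch.Tensor, steps: int = 2000) -> TinyINR:
    """Overfit the INR to a single (H, W, 3) image with values in [0, 1]."""
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing='ij')
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    model = TinyINR()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()  # plain MSE distortion
        loss.backward()
        opt.step()
    return model
```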
- Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
arXiv Detail & Related papers (2021-05-16T20:53:31Z)
- Regularization by Denoising Sub-sampled Newton Method for Spectral CT Multi-Material Decomposition [78.37855832568569]
We propose to solve a model-based maximum-a-posteriori problem to reconstruct multi-material images, with application to spectral CT.
In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function.
We show numerical and experimental results for spectral CT materials decomposition.
arXiv Detail & Related papers (2021-03-25T15:20:10Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Top-k, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
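SIDCo's statistical threshold estimation is particular to that paper; purely as an illustration of threshold-based gradient sparsification in general, here is a minimal PyTorch sketch that transmits only (index, value) pairs above a magnitude threshold:

```python
import math
import torch

def sparsify(grad: torch.Tensor, threshold: float):
    """Keep entries with |g| >= threshold and send only (indices, values).
    SIDCo-like schemes fit a distribution to |g| to pick a threshold that
    hits a target compression ratio without sorting the whole gradient."""
    flat = grad.flatten()
    idx = (flat.abs() >= threshold).nonzero(as_tuple=True)[0]
    return idx, flat[idx]

def densify(idx: torch.Tensor, vals: torch.Tensor, shape) -> torch.Tensor:
    """Reconstruct a dense gradient, with zeros for the dropped entries."""
    out = torch.zeros(math.prod(shape), dtype=vals.dtype)
    out[idx] = vals
    return out.reshape(shape)
```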
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.