Wavelet Feature Maps Compression for Image-to-Image CNNs
- URL: http://arxiv.org/abs/2205.12268v1
- Date: Tue, 24 May 2022 20:29:19 GMT
- Title: Wavelet Feature Maps Compression for Image-to-Image CNNs
- Authors: Shahaf E. Finder, Yair Zohav, Maor Ashkenazi, Eran Treister
- Abstract summary: We propose a novel approach for high-resolution activation maps compression integrated with point-wise convolutions.
We achieve compression rates equivalent to 1-4 bit activation quantization with relatively small and much more graceful performance degradation.
- Score: 3.1542695050861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are known for requiring extensive
computational resources, and quantization is among the best and most common
methods for compressing them. While aggressive quantization (i.e., fewer than
4 bits) performs well for classification, it may cause severe performance
degradation in image-to-image tasks such as semantic segmentation and depth
estimation. In this paper, we propose Wavelet Compressed Convolution (WCC) -- a
novel approach for high-resolution activation maps compression integrated with
point-wise convolutions, which are the main computational cost of modern
architectures. To this end, we use an efficient and hardware-friendly
Haar-wavelet transform, known for its effectiveness in image compression, and
define the convolution on the compressed activation map. We experiment on
various tasks that benefit from high-resolution input, and by combining WCC
with light quantization, we achieve compression rates equivalent to 1-4 bit
activation quantization with relatively small and much more graceful
performance degradation.
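The key property that lets a point-wise (1x1) convolution operate on wavelet-compressed activations is that the two operations commute: a 1x1 convolution mixes channels independently at every spatial location, while the Haar transform acts spatially within each channel, so both are linear maps on disjoint axes. A minimal NumPy sketch of this commutation, with illustrative function names and shapes that are assumptions rather than the paper's implementation:

```python
import numpy as np

def haar2d_level(x):
    """One-level 2D Haar transform, applied per channel.
    x: array of shape (C, H, W) with even H and W.
    Returns the four sub-bands (LL, LH, HL, HH), each (C, H/2, W/2)."""
    s = np.sqrt(2.0)
    lo = (x[:, 0::2, :] + x[:, 1::2, :]) / s   # row-wise average
    hi = (x[:, 0::2, :] - x[:, 1::2, :]) / s   # row-wise difference
    ll = (lo[:, :, 0::2] + lo[:, :, 1::2]) / s
    lh = (lo[:, :, 0::2] - lo[:, :, 1::2]) / s
    hl = (hi[:, :, 0::2] + hi[:, :, 1::2]) / s
    hh = (hi[:, :, 0::2] - hi[:, :, 1::2]) / s
    return ll, lh, hl, hh

def pointwise_conv(x, w):
    """1x1 convolution: mixes channels at each spatial location.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

# Both operations are linear and act on different axes, so a
# point-wise convolution commutes with the Haar transform:
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))   # toy activation map
w = rng.standard_normal((5, 3))      # 1x1 convolution weights
ll_then_conv = pointwise_conv(haar2d_level(x)[0], w)
conv_then_ll = haar2d_level(pointwise_conv(x, w))[0]
assert np.allclose(ll_then_conv, conv_then_ll)
```

In the actual WCC setting, compression comes from discarding or quantizing small wavelet coefficients before the convolution; the sketch above only demonstrates why applying the convolution in the wavelet domain is valid in the first place.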
Related papers
- DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding [27.875207681547074]
Progressive image coding (PIC) aims to compress images at various qualities into a single bitstream.
Research on neural network (NN)-based PIC is in its early stages.
We propose an NN-based progressive coding method that utilizes learned quantization step sizes for each quantization layer.
arXiv Detail & Related papers (2024-08-22T06:32:53Z)
- Streaming Lossless Volumetric Compression of Medical Images Using Gated Recurrent Convolutional Neural Network [0.0]
This paper introduces a hardware-friendly streaming lossless volumetric compression framework.
We propose a gated recurrent convolutional neural network that combines diverse convolutional structures and fusion gate mechanisms.
Our method exhibits robust generalization ability and competitive compression speed.
arXiv Detail & Related papers (2023-11-27T07:19:09Z)
- Advance quantum image representation and compression using DCTEFRQI approach [0.5735035463793007]
We propose a DCTEFRQI (Direct Cosine Transform Efficient Flexible Representation of Quantum Image) algorithm to represent and compress grayscale images efficiently.
The objective of this work is to represent and compress grayscale images of various sizes on a quantum computer using the DCT (Discrete Cosine Transform) and EFRQI (Efficient Flexible Representation of Quantum Image) approaches together.
arXiv Detail & Related papers (2022-08-30T13:54:09Z)
- Exploring Structural Sparsity in Neural Image Compression [14.106763725475469]
We propose a plug-in adaptive binary channel masking (ABCM) scheme to judge the importance of each convolution channel and introduce sparsity during training.
During inference, the unimportant channels are pruned to obtain a slimmer network and less computation.
Experimental results show that up to 7x computation reduction and 3x acceleration can be achieved with a negligible performance drop.
arXiv Detail & Related papers (2022-02-09T17:46:49Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms target rate-distortion tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on the Spatial Feature Transform (SFT, arXiv:1804.02815).
Our model covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.