Picking Up Quantization Steps for Compressed Image Classification
- URL: http://arxiv.org/abs/2304.10714v1
- Date: Fri, 21 Apr 2023 02:56:13 GMT
- Title: Picking Up Quantization Steps for Compressed Image Classification
- Authors: Li Ma, Peixi Peng, Guangyao Chen, Yifan Zhao, Siwei Dong and Yonghong Tian
- Abstract summary: We argue that neglected disposable coding parameters stored in compressed files could be picked up to reduce the sensitivity of deep neural networks to compressed images.
Specifically, we resort to using one of the representative parameters, quantization steps, to facilitate image classification.
The proposed method significantly improves the performance of classification networks on CIFAR-10, CIFAR-100, and ImageNet.
- Score: 41.065275887759945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The sensitivity of deep neural networks to compressed images hinders their
usage in many real applications: a classification network may fail simply
because an image, such as a screenshot, was saved as a compressed file. In this
paper, we argue that neglected disposable coding parameters stored in
compressed files could be picked up to reduce the sensitivity of deep neural
networks to compressed images. Specifically, we resort to using one of the
representative parameters, quantization steps, to facilitate image
classification. Firstly, based on quantization steps, we propose a novel
quantization aware confidence (QAC), which is utilized as sample weights to
reduce the influence of quantization on network training. Secondly, we utilize
quantization steps to alleviate the variance of feature distributions, where a
quantization aware batch normalization (QABN) is proposed to replace batch
normalization of classification networks. Extensive experiments show that the
proposed method significantly improves the performance of classification
networks on CIFAR-10, CIFAR-100, and ImageNet. The code is released at
https://github.com/LiMaPKU/QSAM.git
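The repository above contains the official implementation. As a rough illustration of how the two ideas could fit together, the sketch below derives per-sample loss weights from JPEG quantization steps (a stand-in for QAC) and keeps separate BatchNorm statistics per quantization level (one plausible reading of QABN). The mapping function, bin boundaries, and model wiring are assumptions for illustration, not the authors' exact formulation.
```python
# Illustrative sketch only: the QAC mapping, the bin boundaries, and the
# way q_bin is threaded to the model are assumptions, not the paper's
# exact method (see the QSAM repository for the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantization_aware_confidence(q_steps: torch.Tensor) -> torch.Tensor:
    """Map per-image quantization steps to a confidence weight in (0, 1].

    q_steps: (B, 64) luma quantization-table entries read from each JPEG
    header. Coarser quantization (larger steps) -> lower confidence; this
    monotone mapping is an illustrative assumption.
    """
    severity = q_steps.float().mean(dim=1)        # (B,) mean step size
    return torch.exp(-severity / severity.max())  # larger steps -> smaller weight

class QuantizationAwareBN(nn.Module):
    """BatchNorm2d with one set of statistics per quantization bin, so
    features from lightly and heavily compressed images are normalized
    separately (one plausible reading of QABN)."""

    def __init__(self, num_features: int, num_bins: int = 4):
        super().__init__()
        self.num_bins = num_bins
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(num_bins))

    def forward(self, x: torch.Tensor, q_bin: torch.Tensor) -> torch.Tensor:
        # q_bin: (B,) integer bin index derived from the quantization steps.
        out = torch.empty_like(x)
        for b in range(self.num_bins):
            mask = q_bin == b
            if mask.any():
                out[mask] = self.bns[b](x[mask])
        return out

def training_step(model, images, labels, q_steps):
    """QAC weights the per-sample cross-entropy; the model is assumed to
    route q_bin to its QuantizationAwareBN layers."""
    q_bin = (q_steps.float().mean(dim=1) // 16).long().clamp(max=3)
    per_sample = F.cross_entropy(model(images, q_bin), labels, reduction="none")
    return (quantization_aware_confidence(q_steps) * per_sample).mean()
```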
Related papers
- DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding [27.875207681547074]
Progressive image coding (PIC) aims to compress various qualities of images into a single bitstream.
Research on neural network (NN)-based PIC is in its early stages.
We propose an NN-based progressive coding method that is the first to utilize learned quantization step sizes for each quantization layer.
arXiv Detail & Related papers (2024-08-22T06:32:53Z) - AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution [53.23803932357899]
- AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution [53.23803932357899]
We introduce the first on-the-fly adaptive quantization framework that accelerates the processing time from hours to seconds.
We achieve competitive performance with previous adaptive quantization methods, while accelerating the processing time by about 2000×.
arXiv Detail & Related papers (2024-04-04T08:37:27Z) - Neural Image Compression with Quantization Rectifier [7.097091519502871]
- Neural Image Compression with Quantization Rectifier [7.097091519502871]
We develop a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization.
Our method designs a neural network architecture that predicts unquantized features from the quantized ones.
In evaluation, we integrate QR into state-of-the-art neural image codecs and compare enhanced models and baselines on the widely-used Kodak benchmark.
arXiv Detail & Related papers (2024-03-25T22:26:09Z) - Post-Training Quantization for Re-parameterization via Coarse & Fine
Weight Splitting [13.270381125055275]
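A minimal sketch of the rectifier idea above: a small residual network trained to map quantized latents back toward their unquantized values. The architecture, channel count, and training loop are illustrative assumptions, not the paper's design.
```python
# Illustrative rectifier: predicts unquantized features from quantized
# ones via a residual correction; stand-in latents replace a real codec.
import torch
import torch.nn as nn

class Rectifier(nn.Module):
    def __init__(self, channels: int = 192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, y_hat: torch.Tensor) -> torch.Tensor:
        return y_hat + self.net(y_hat)            # residual correction toward y

rect = Rectifier()
opt = torch.optim.Adam(rect.parameters(), lr=1e-4)
y = torch.randn(8, 192, 16, 16)                   # stand-in unquantized latents
y_hat = torch.round(y)                            # stand-in quantized latents
loss = nn.functional.mse_loss(rect(y_hat), y)     # pull rectified features to y
loss.backward()
opt.step()
```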
We propose a coarse & fine weight splitting (CFWS) method to reduce quantization error of weight.
We develop an improved KL metric to determine optimal quantization scales for activation.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss.
arXiv Detail & Related papers (2023-12-17T02:31:20Z) - Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z) - CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
- CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
arXiv Detail & Related papers (2022-07-21T07:50:50Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network
Quantization Loss [61.26793005355441]
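As a simplified illustration of content-aware bit allocation, the sketch below assigns higher bit-widths to image patches with more high-frequency content, measured by mean gradient magnitude. CADyQ itself learns this mapping with a trainable bit selector; the fixed thresholds here only convey the idea.
```python
# Heuristic per-patch bit allocation by local gradient energy; the real
# CADyQ selector is learned, so treat this purely as an illustration.
import torch
import torch.nn.functional as F

def allocate_bits(img: torch.Tensor, patch: int = 32) -> torch.Tensor:
    """img: (1, 1, H, W) grayscale in [0, 1]; returns a per-patch bit map."""
    gx = img[..., :, 1:] - img[..., :, :-1]            # horizontal gradient
    gy = img[..., 1:, :] - img[..., :-1, :]            # vertical gradient
    grad = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    energy = F.avg_pool2d(grad, patch)                 # mean gradient per patch
    levels = torch.bucketize(energy, torch.tensor([0.05, 0.15, 0.30]))
    return levels + 4                                  # flat -> 4 bits, busy -> 7

bits = allocate_bits(torch.rand(1, 1, 256, 256))       # (1, 1, 8, 8) bit map
```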
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - Direct Quantization for Training Highly Accurate Low Bit-width Deep
Neural Networks [73.29587731448345]
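A sketch of the bit-drop idea above: where dropout zeroes random neurons, a DropBits-style layer randomly coarsens the quantization precision during training, in effect dropping low-order bits. The mechanics below are an illustrative reading of the summary, not the paper's exact procedure.
```python
# Randomly coarsened quantization during training ("dropping bits");
# illustrative reading of DropBits, not the paper's exact scheme.
import torch
import torch.nn as nn

class BitDrop(nn.Module):
    def __init__(self, max_bits: int = 8, drop_p: float = 0.5):
        super().__init__()
        self.max_bits, self.drop_p = max_bits, drop_p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bits = self.max_bits
        if self.training and torch.rand(()) < self.drop_p:
            bits -= int(torch.randint(1, 3, ()))       # drop 1-2 low-order bits
        scale = x.abs().max().clamp_min(1e-8) / (2 ** (bits - 1) - 1)
        q = torch.round(x / scale) * scale
        return x + (q - x).detach()                    # straight-through estimator
```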
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods quantize the full-precision network weights.
Second, to obtain low bit-width activations, existing works treat all channels equally.
arXiv Detail & Related papers (2020-12-26T15:21:18Z) - Cross-filter compression for CNN inference acceleration [4.324080238456531]
- Cross-filter compression for CNN inference acceleration [4.324080238456531]
We propose a new cross-filter compression method that can provide ~32× memory savings and 122× speed-up in convolution operations.
Our method, built separately on Binary-Weight and XNOR-Net, is evaluated on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-05-18T19:06:14Z)