Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training
- URL: http://arxiv.org/abs/2208.07075v1
- Date: Mon, 15 Aug 2022 08:43:21 GMT
- Title: Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training
- Authors: Arian Bakhtiarnia, Qi Zhang and Alexandros Iosifidis
- Abstract summary: Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
- Score: 90.76576712433595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The JPEG image compression algorithm is a widely used technique for image size
reduction in edge and cloud computing settings. However, applying such lossy
compression on images processed by deep neural networks can lead to significant
accuracy degradation. Inspired by the curriculum learning paradigm, we present
a novel training approach called curriculum pre-training (CPT) for crowd
counting on compressed images, which alleviates the drop in accuracy resulting
from lossy compression. We verify the effectiveness of our approach by
extensive experiments on three crowd counting datasets, two crowd counting DNN
models and various levels of compression. Our proposed training method is not
overly sensitive to hyper-parameters, and reduces the error, particularly for
heavily compressed images, by up to 19.70%.
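For intuition, a curriculum over compression levels can be pictured as ordinary pre-training that steps through progressively lower JPEG quality factors. The Python/PyTorch snippet below is a minimal, hypothetical sketch under that assumption; the quality schedule, model interface, loss, and optimizer are illustrative placeholders rather than the paper's exact CPT procedure.

```python
import io

import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor


def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode a PIL image as JPEG at the given quality (1-100)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Hypothetical curriculum: pre-train on lightly compressed images first, then
# move toward the heavy compression expected at inference time. The schedule
# below is illustrative, not the paper's setting.
CURRICULUM_QUALITIES = [90, 70, 50, 30, 10]


def curriculum_pretrain(model, samples, epochs_per_stage=5, lr=1e-4):
    """Sketch of curriculum pre-training for a crowd counting model.

    `samples` is assumed to be an iterable of (PIL image, density-map tensor)
    pairs, and `model` is assumed to map an image tensor to a predicted
    density map of the same shape.
    """
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for quality in CURRICULUM_QUALITIES:        # one curriculum stage per quality level
        for _ in range(epochs_per_stage):
            for pil_img, density_map in samples:
                x = to_tensor(jpeg_compress(pil_img, quality)).unsqueeze(0)
                pred = model(x)
                loss = criterion(pred, density_map.unsqueeze(0))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return model
```

In the paper's setting, the pre-trained model would then be fine-tuned and evaluated on the target compression level; per the abstract, the method is not overly sensitive to such hyper-parameter choices.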
Related papers
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Deep learning based Image Compression for Microscopy Images: An Empirical Study [3.915183869199319]
This study analyzes classical and deep learning-based image compression methods and their impact on deep learning-based image processing models.
To find a compression setting that preserves downstream performance, multiple classical lossy image compression techniques are compared with several AI-based compression models.
We found that AI-based compression techniques largely outperform the classical ones and minimally affect the downstream label-free task in 2D cases.
arXiv Detail & Related papers (2023-11-02T16:00:32Z)
- Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) [0.0]
Convolutional Neural Networks (CNNs) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method using autoencoders is proposed.
arXiv Detail & Related papers (2022-08-26T12:46:16Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide a Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms manage rate-distortion tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Learned Image Compression for Machine Perception [17.40776913809306]
We develop a framework that produces a compression format suitable for both human perception and machine perception.
We show that representations can be learned that simultaneously optimize for compression and performance on core vision tasks.
arXiv Detail & Related papers (2021-11-03T14:39:09Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed with the proposed method can still be reliably recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
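The Discernible Image Compression entry above hinges on a pre-trained CNN that extracts features from the original and compressed images so that the two stay perceptually consistent. Below is a minimal sketch of such a feature-consistency term, using a frozen torchvision VGG16 backbone as an assumed feature extractor; the layer index and loss weighting are illustrative, not the paper's configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen pre-trained CNN used only as a feature extractor (illustrative choice).
# Inputs are assumed to be ImageNet-normalized image tensors of shape (N, 3, H, W).
_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in _features.parameters():
    p.requires_grad_(False)


def feature_consistency_loss(original: torch.Tensor,
                             reconstructed: torch.Tensor,
                             layer: int = 16) -> torch.Tensor:
    """MSE between intermediate CNN features of the original image and the
    codec's reconstruction; gradients flow only through the reconstruction."""
    f_orig = _features[:layer](original)
    f_rec = _features[:layer](reconstructed)
    return F.mse_loss(f_rec, f_orig)


# Example combined objective (weights are placeholders):
# loss = pixel_loss + 0.1 * feature_consistency_loss(x, x_hat)
```

Such a term would typically be added to the codec's usual distortion and rate losses so that compressed images remain usable by downstream recognition and detection models.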
This list is automatically generated from the titles and abstracts of the papers on this site.