Analysis of the Effect of Low-Overhead Lossy Image Compression on the
Performance of Visual Crowd Counting for Smart City Applications
- URL: http://arxiv.org/abs/2207.10155v1
- Date: Wed, 20 Jul 2022 19:20:03 GMT
- Authors: Arian Bakhtiarnia, Błażej Leporowski, Lukas Esterle and Alexandros Iosifidis
- Abstract summary: Lossy image compression techniques can reduce the quality of the images, leading to accuracy degradation.
In this paper, we analyze the effect of applying low-overhead lossy image compression methods on the accuracy of visual crowd counting.
- Score: 78.55896581882595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images and video frames captured by cameras placed throughout smart cities
are often transmitted over the network to a server to be processed by deep
neural networks for various tasks. Transmission of raw images, i.e., without
any form of compression, requires high bandwidth and can lead to congestion
issues and delays in transmission. The use of lossy image compression
techniques can reduce the quality of the images, leading to accuracy
degradation. In this paper, we analyze the effect of applying low-overhead
lossy image compression methods on the accuracy of visual crowd counting, and
measure the trade-off between bandwidth reduction and the obtained accuracy.
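The measurement the abstract describes can be sketched concretely. Below is a minimal sketch, not the authors' pipeline: it sweeps JPEG quality settings with Pillow, reports bandwidth reduction relative to raw 8-bit RGB, and scores a placeholder crowd-counting model on the decoded images. `count_crowd` and `true_count` are hypothetical names introduced for illustration.

```python
# Minimal sketch (not the authors' code): sweep JPEG quality levels,
# record compressed size against the raw-RGB baseline, and evaluate a
# placeholder crowd-counting model on the decoded images.
import io

from PIL import Image


def jpeg_roundtrip(img: Image.Image, quality: int) -> tuple[Image.Image, int]:
    """Compress `img` as JPEG at `quality`; return decoded image and byte size."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    size = buf.tell()
    buf.seek(0)
    return Image.open(buf), size


def sweep(img: Image.Image, true_count: float, count_crowd) -> None:
    raw_size = img.width * img.height * 3  # uncompressed 8-bit RGB baseline
    for quality in (90, 70, 50, 30, 10):
        decoded, size = jpeg_roundtrip(img, quality)
        error = abs(count_crowd(decoded) - true_count)  # per-image absolute error
        print(f"quality={quality:3d}  "
              f"bandwidth reduction={1 - size / raw_size:.1%}  "
              f"abs error={error:.2f}")
```

Averaging the per-image absolute error over a test set gives the MAE commonly reported for crowd counting, which can then be plotted against the bandwidth reduction at each quality level.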
Related papers
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Are Visual Recognition Models Robust to Image Compression? [23.280147529096908]
We analyze the impact of image compression on visual recognition tasks.
We consider a wide range of compression levels, from 0.1 to 2 bits-per-pixel (bpp); a minimal bpp computation is sketched after this list.
We find that for all three tasks, the recognition ability is significantly impacted when using strong compression.
arXiv Detail & Related papers (2023-04-10T11:30:11Z)
- Improving Multi-generation Robustness of Learned Image Compression [16.86614420872084]
We show that learned image compression (LIC) can match the first-compression performance of BPG even after 50 re-encodings, without any change to the network structure.
arXiv Detail & Related papers (2022-10-31T03:26:11Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem using a VAE-based approach.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) [0.0]
Convolutional Neural Networks (CNNs) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method using autoencoders is proposed.
arXiv Detail & Related papers (2022-08-26T12:46:16Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression to images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Optimizing Image Compression via Joint Learning with Denoising [49.83680496296047]
High levels of noise are common in today's captured images due to the relatively small sensors in smartphone cameras.
We propose a novel two-branch, weight-sharing architecture with plug-in feature denoisers to allow a simple and effective realization of the goal with little computational cost.
arXiv Detail & Related papers (2022-07-22T04:23:01Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms trade quality for size by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Soft Compression for Lossless Image Coding [17.714164324169037]
We propose a new concept, the compressible indicator function, for images.
Applying soft compression is expected to greatly reduce the bandwidth and storage space needed to transmit and store images of the same kind.
arXiv Detail & Related papers (2020-12-11T10:59:47Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
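Several of the papers above quote compression levels in bits-per-pixel (bpp), such as the 0.1 to 2 bpp range mentioned earlier. The following is a minimal sketch of how that metric is computed for any compressed image file; the function name is illustrative, not from any of the papers.

```python
# Minimal sketch of the bits-per-pixel (bpp) metric:
# total compressed bits divided by the number of pixels.
import os

from PIL import Image


def bits_per_pixel(path: str) -> float:
    """bpp = 8 * compressed file size in bytes / (width * height)."""
    with Image.open(path) as img:
        num_pixels = img.width * img.height
    return 8 * os.path.getsize(path) / num_pixels
```

As a sanity check, a 1920x1080 image stored in roughly 260 kB comes out to about 1 bpp, while the same frame as raw 8-bit RGB is 24 bpp.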
This list is automatically generated from the titles and abstracts of the papers on this site.