Convolutional Neural Network (CNN) to reduce construction loss in JPEG
compression caused by Discrete Fourier Transform (DFT)
- URL: http://arxiv.org/abs/2209.03475v2
- Date: Sun, 2 Jul 2023 23:41:55 GMT
- Title: Convolutional Neural Network (CNN) to reduce construction loss in JPEG
compression caused by Discrete Fourier Transform (DFT)
- Authors: Suman Kunwar
- Abstract summary: Convolutional Neural Networks (CNN) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method is proposed using autoencoders.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent decades, digital image processing has gained enormous popularity.
Consequently, a number of data compression strategies have been put forth, with
the goal of minimizing the amount of information required to represent images.
Among them, JPEG compression is one of the most popular methods that has been
widely applied in multimedia and digital applications. Because the DFT assumes a
periodic signal, an image's opposing edges generally cannot satisfy this
periodicity condition, and the resulting discontinuity produces severe artifacts
that lower the image's perceptual visual quality. On the other hand, deep
learning has recently achieved
outstanding results for applications like speech recognition, image reduction,
and natural language processing. Convolutional Neural Networks (CNN) have
received more attention than most other types of deep neural networks. The use
of convolution in feature extraction results in a less redundant feature map
and a smaller dataset, both of which are crucial for image compression. In this
work, an effective image compression method is proposed using autoencoders. The
study's findings revealed several important trends, suggesting that better
reconstruction along with good compression can be achieved using autoencoders.
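
The claim about the DFT's periodicity can be made concrete with a small numerical check. The sketch below assumes only NumPy; the test signal and the frequency cutoff are illustrative choices, not taken from the paper. It shows that when a signal's opposing ends disagree, the DFT's implicit periodic extension contains a jump, which pushes energy into high frequencies; discarding those coefficients, as lossy transform coding does, then causes ringing artifacts near the edges.

```python
# Illustrative sketch (NumPy only): a 1-D signal whose opposing ends differ behaves,
# under the DFT's implicit periodic extension, like a signal with a jump
# discontinuity, which spreads energy into high frequencies.
import numpy as np

n = 256
ramp = np.linspace(0.0, 1.0, n)   # left edge = 0.0, right edge = 1.0 (edges disagree)
flat = np.full(n, 0.5)            # edges agree, no wrap-around jump

ramp_spec = np.abs(np.fft.fft(ramp))
flat_spec = np.abs(np.fft.fft(flat))

# High-frequency energy (everything above the lowest few coefficients):
hf_ramp = ramp_spec[8:n // 2].sum()
hf_flat = flat_spec[8:n // 2].sum()
print(f"high-frequency energy, mismatched edges: {hf_ramp:.3f}")
print(f"high-frequency energy, matched edges:    {hf_flat:.3f}")
# Discarding those high-frequency coefficients (as lossy compression does)
# therefore produces ringing near the edges of the mismatched-edge signal.
```

Likewise, the autoencoder approach the abstract describes can be pictured as a small convolutional encoder/decoder pair trained with a reconstruction loss. The following PyTorch sketch is only a minimal illustration under assumed layer sizes and latent width; it is not the paper's exact architecture or training setup.

```python
# Minimal convolutional autoencoder sketch (assumed layout, not the paper's model):
# strided convolutions compress an RGB image into a smaller latent tensor, and
# transposed convolutions reconstruct it.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_channels: int = 16):
        super().__init__()
        # Encoder: two stride-2 convolutions shrink H and W by 4x overall,
        # so the latent tensor is the "compressed" representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, kernel_size=3, stride=2, padding=1),
        )
        # Decoder: mirrored transposed convolutions restore the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # outputs in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = ConvAutoencoder()
    x = torch.rand(1, 3, 128, 128)           # dummy normalized image batch
    recon = model(x)
    loss = nn.functional.mse_loss(recon, x)  # reconstruction loss to minimize
    print(recon.shape, float(loss))
```

In this sketch the compression ratio is set by the size of the latent tensor relative to the input, and training minimizes the reconstruction (e.g. mean-squared) error that the abstract refers to.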
Related papers
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
- Deep learning based Image Compression for Microscopy Images: An Empirical Study [3.915183869199319]
This study analyzes classic and deep learning based image compression methods, and their impact on deep learning based image processing models.
To compress images in the desired way, multiple classical lossy image compression techniques are compared with several AI-based compression models.
We found that AI-based compression techniques largely outperform the classic ones and will minimally affect the downstream label-free task in 2D cases.
arXiv Detail & Related papers (2023-11-02T16:00:32Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression to images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Analysis of the Effect of Low-Overhead Lossy Image Compression on the Performance of Visual Crowd Counting for Smart City Applications [78.55896581882595]
Lossy image compression techniques can reduce the quality of the images, leading to accuracy degradation.
In this paper, we analyze the effect of applying low-overhead lossy image compression methods on the accuracy of visual crowd counting.
arXiv Detail & Related papers (2022-07-20T19:20:03Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs, including quantization, quantization-aware retraining, and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Image Compression with Recurrent Neural Network and Generalized Divisive Normalization [3.0204520109309843]
Deep learning has gained huge attention from the research community and produced promising image reconstruction results.
Recent methods focused on developing deeper and more complex networks, which significantly increased network complexity.
In this paper, two effective novel blocks are developed: analysis and synthesis blocks that employ convolution layers and Generalized Divisive Normalization (GDN) on the variable-rate encoder and decoder sides.
arXiv Detail & Related papers (2021-09-05T05:31:55Z)
- Enhanced Invertible Encoding for Learned Image Compression [40.21904131503064]
In this paper, we propose an enhanced Invertible Encoding Network with invertible neural networks (INNs) to largely mitigate the information loss problem for better compression.
Experimental results on the Kodak, CLIC, and Tecnick datasets show that our method outperforms the existing learned image compression methods.
arXiv Detail & Related papers (2021-08-08T17:32:10Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)