Learning-Driven Lossy Image Compression; A Comprehensive Survey
- URL: http://arxiv.org/abs/2201.09240v1
- Date: Sun, 23 Jan 2022 12:11:31 GMT
- Title: Learning-Driven Lossy Image Compression; A Comprehensive Survey
- Authors: Sonain Jamil, Md. Jalil Piran, and Muhib Ur Rahman
- Abstract summary: This paper aims to survey recent techniques utilizing mostly lossy image compression using machine learning (ML) architectures.
We divide all of the algorithms into several groups based on architecture.
Key findings for researchers are emphasized, along with possible future research directions.
- Score: 3.1761172592339375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of image processing and computer vision (CV), machine learning
(ML) architectures are widely applied. Convolutional neural networks (CNNs)
solve a wide range of image processing issues and can also address the image
compression problem. Compression of images is necessary due to bandwidth and memory
constraints. Images contain three different forms of information: helpful,
redundant, and irrelevant. This paper aims to survey recent
techniques utilizing mostly lossy image compression using ML architectures
including different auto-encoders (AEs) such as convolutional auto-encoders
(CAEs), variational auto-encoders (VAEs), and AEs with hyper-prior models,
recurrent neural networks (RNNs), CNNs, generative adversarial networks (GANs),
principal component analysis (PCA) and fuzzy means clustering. We divide all of
the algorithms into several groups based on architecture. We cover still image
compression in this survey. Key findings for researchers are emphasized,
along with possible future research directions. Open research
problems such as out of memory (OOM), striped region distortion (SRD),
aliasing, and compatibility of the frameworks with central processing unit
(CPU) and graphics processing unit (GPU) simultaneously are explained. The
majority of the publications in the compression domain surveyed are from the
previous five years and use a variety of approaches.
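Among the architecture families the abstract lists, PCA is the simplest to demonstrate. The sketch below is not from the survey; it is a minimal, hedged illustration of PCA-based lossy compression using only NumPy, with a synthetic image and illustrative names throughout:

```python
# Minimal sketch of PCA-based lossy image compression, one of the
# classical techniques grouped in the survey. Pure NumPy; the image
# is synthetic and all names here are illustrative.
import numpy as np

def pca_compress(img, k):
    """Keep only the top-k principal components of the image's rows."""
    mean = img.mean(axis=0)
    centered = img - mean
    # SVD of the centered rows yields the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # k x W  (stored alongside the coefficients)
    coeffs = centered @ basis.T     # H x k  (the compressed representation)
    return mean, basis, coeffs

def pca_decompress(mean, basis, coeffs):
    return coeffs @ basis + mean

rng = np.random.default_rng(0)
# Synthetic low-rank "image": a smooth gradient plus mild noise.
x = np.linspace(0.0, 1.0, 64)
img = np.outer(x, x) + 0.01 * rng.standard_normal((64, 64))

mean, basis, coeffs = pca_compress(img, k=4)
recon = pca_decompress(mean, basis, coeffs)
err = np.mean((img - recon) ** 2)
stored = mean.size + basis.size + coeffs.size
print(f"stored values: {stored} vs {img.size} original pixels")
print(f"reconstruction MSE: {err:.6f}")
```

Keeping only k components trades reconstruction error for storage, which is the same rate-distortion trade-off the learned methods in the survey optimize end-to-end.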
Related papers
- Bottleneck-based Encoder-decoder ARchitecture (BEAR) for Learning Unbiased Consumer-to-Consumer Image Representations [0.6990493129893112]
This paper presents different image feature extraction mechanisms that work together with residual connections to encode perceptual image information in an autoencoder configuration.
Preliminary results suggest that the proposed architecture can learn rich representation spaces on ours and other image datasets, resolving important challenges identified in the paper.
arXiv Detail & Related papers (2024-09-10T03:31:18Z) - Computer Vision Model Compression Techniques for Embedded Systems: A Survey [75.38606213726906]
This paper covers the main model compression techniques applied for computer vision tasks.
We present the characteristics of compression subareas, compare different approaches, and discuss how to choose the best technique.
We also share codes to assist researchers and new practitioners in overcoming initial implementation challenges.
arXiv Detail & Related papers (2024-08-15T16:41:55Z) - UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z) - Deep learning based Image Compression for Microscopy Images: An Empirical Study [3.915183869199319]
This study analyzes classic and deep learning based image compression methods, and their impact on deep learning based image processing models.
To compress images while preserving their usefulness for downstream analysis, multiple classical lossy image compression techniques are compared with several AI-based compression models.
We found that AI-based compression techniques largely outperform the classic ones and will minimally affect the downstream label-free task in 2D cases.
arXiv Detail & Related papers (2023-11-02T16:00:32Z) - Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) [0.0]
Convolutional Neural Networks (CNN) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method is proposed using autoencoders.
arXiv Detail & Related papers (2022-08-26T12:46:16Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Image Compression with Recurrent Neural Network and Generalized Divisive Normalization [3.0204520109309843]
Deep learning has gained huge attention from the research community and produced promising image reconstruction results.
Recent methods have focused on developing deeper and more complex networks, significantly increasing computational complexity.
In this paper, two effective novel blocks are developed: analysis and synthesis blocks that employ the convolution layer and Generalized Divisive Normalization (GDN) on the variable-rate encoder and decoder sides.
arXiv Detail & Related papers (2021-09-05T05:31:55Z) - An Implementation of Vector Quantization using the Genetic Algorithm Approach [0.0]
This paper discusses some implementations of image compression algorithms that use techniques such as Artificial Neural Networks, Residual Learning, Fuzzy Neural Networks, Convolutional Neural Networks, Deep Learning, and Genetic Algorithms.
The paper also describes an implementation of Vector Quantization that uses a GA to generate the codebook used for lossy image compression.
arXiv Detail & Related papers (2021-02-16T03:57:13Z) - CNNs for JPEGs: A Study in Computational Cost [49.97673761305336]
Convolutional neural networks (CNNs) have achieved astonishing advances over the past decade.
CNNs are capable of learning robust representations of the data directly from the RGB pixels.
Deep learning methods capable of learning directly from the compressed domain have been gaining attention in recent years.
arXiv Detail & Related papers (2020-12-26T15:00:10Z) - Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
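One listed paper builds a vector quantization codebook with a genetic algorithm. As a hedged sketch of that same compress-by-codebook pipeline, the code below substitutes a plain k-means-style update for the GA (a simpler stand-in, not the paper's method) and quantizes flattened 2x2 pixel blocks:

```python
# Sketch of codebook-based vector quantization for lossy image
# compression. The cited paper evolves the codebook with a genetic
# algorithm; here a Lloyd/k-means update is used as a stand-in.
import numpy as np

def train_codebook(blocks, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize codewords from k distinct training blocks.
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # Assign each block to its nearest codeword (squared distance).
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = blocks[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(blocks, codebook):
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)   # these indices are the compressed stream

rng = np.random.default_rng(1)
img = rng.random((16, 16))
# Split the image into flattened 2x2 blocks (the vectors to quantize).
blocks = img.reshape(8, 2, 8, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
codebook = train_codebook(blocks, k=8)
idx = quantize(blocks, codebook)
recon_blocks = codebook[idx]
mse = np.mean((blocks - recon_blocks) ** 2)
print(f"{len(idx)} indices + {codebook.size} codewords replace {img.size} pixels")
```

The GA in the paper plays the same role as the codeword-update loop here: both search for a codebook that minimizes the reconstruction distortion of the quantized blocks.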
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.