An Implementation of Vector Quantization using the Genetic Algorithm Approach
- URL: http://arxiv.org/abs/2102.08893v1
- Date: Tue, 16 Feb 2021 03:57:13 GMT
- Title: An Implementation of Vector Quantization using the Genetic Algorithm Approach
- Authors: Maha Mohammed Khan
- Abstract summary: This paper discusses implementations of image compression algorithms that use techniques such as Artificial Neural Networks, Residual Learning, Fuzzy Neural Networks, Convolutional Neural Nets, Deep Learning, and Genetic Algorithms.
The paper also describes an implementation of Vector Quantization that uses a GA to generate the codebook for lossy image compression.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of machine learning (ML) and genetic programming (GP) to the image compression domain has produced promising results in many cases. The need for compression arises from the exorbitant size of data shared on the internet. Compression is required for the text, videos, and images used almost everywhere on the web, be it news articles, social media posts, blogs, educational platforms, the medical domain, or government services; all of this content must travel in network packets, so compression is necessary to avoid overwhelming the network. This paper discusses implementations of image compression algorithms that use techniques such as Artificial Neural Networks, Residual Learning, Fuzzy Neural Networks, Convolutional Neural Nets, Deep Learning, and Genetic Algorithms. The paper also describes an implementation of Vector Quantization that uses a GA to generate the codebook for lossy image compression. All these approaches contrast strongly with standard image-processing approaches owing to the highly parallel and computationally intensive nature of machine learning algorithms. The non-linear modelling abilities of ML and GP make them widely popular across many domains. Traditional approaches are also combined with artificially intelligent systems, yielding hybrid systems that achieve better results.
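The abstract includes no code, but the core idea, evolving a vector-quantization codebook with a genetic algorithm, is compact enough to sketch. The following is a minimal illustration, not the author's implementation: the population size, codebook size, mutation rate, and the crossover and mutation operators are all illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def distortion(codebook, vectors):
        # Mean squared error after mapping each vector to its nearest codeword.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.min(axis=1).mean()

    def evolve_codebook(vectors, codebook_size=16, pop_size=20,
                        generations=50, mutation_rate=0.1):
        # Each individual in the population is a full candidate codebook,
        # initialised from randomly chosen training vectors.
        pop = np.stack([vectors[rng.choice(len(vectors), codebook_size, replace=False)]
                        for _ in range(pop_size)])
        for _ in range(generations):
            fitness = np.array([-distortion(cb, vectors) for cb in pop])
            pop = pop[np.argsort(fitness)[::-1]]   # best codebooks first
            elite = pop[: pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = elite[rng.integers(len(elite), size=2)]
                mask = rng.random(codebook_size) < 0.5          # uniform crossover
                child = np.where(mask[:, None], a, b)
                noise = rng.normal(0.0, 0.05, child.shape)      # Gaussian mutation
                child = child + noise * (rng.random(child.shape) < mutation_rate)
                children.append(child)
            pop = np.concatenate([elite, np.stack(children)])
        return pop[0]

    # Toy usage: quantize random 4x4 "image blocks" flattened to 16-dim vectors.
    blocks = rng.random((200, 16))
    codebook = evolve_codebook(blocks)
    print("final distortion:", distortion(codebook, blocks))

Fitness here is simply negative distortion on the training blocks; a real implementation would evolve on blocks drawn from the image being compressed and would tune the GA hyperparameters.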
Related papers
- Exploiting Inter-Image Similarity Prior for Low-Bitrate Remote Sensing Image Compression [10.427300958330816]
We propose a codebook-based RS image compression (Code-RSIC) method with a generated discrete codebook.
Code-RSIC significantly outperforms state-of-the-art traditional and learning-based image compression algorithms in terms of perceptual quality.
arXiv Detail & Related papers (2024-07-17T03:33:16Z)
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
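A rough sketch of the idea above, under the assumption that the frequency-domain codebook is built by clustering DCT features of image blocks; the block size, the codebook size K, and the k-means clustering are illustrative stand-ins, not the paper's actual construction.

    import numpy as np

    rng = np.random.default_rng(1)

    def dct_matrix(n):
        # Orthonormal DCT-II basis, used to move blocks into the frequency domain.
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        m[0] /= np.sqrt(2)
        return m * np.sqrt(2 / n)

    D = dct_matrix(8)
    blocks = rng.random((500, 8, 8))                  # stand-in image blocks
    freq = np.einsum('ij,bjk,lk->bil', D, blocks, D)  # 2-D DCT of every block
    feats = np.abs(freq).reshape(len(blocks), -1)

    # Tiny k-means to build the frequency-domain codebook (K entries).
    K = 32
    centers = feats[rng.choice(len(feats), K, replace=False)]
    for _ in range(10):
        idx = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if (idx == k).any():
                centers[k] = feats[idx == k].mean(0)

    # Each block's codebook entry would be fed to the INR as conditioning.
    print("code index of first block:", idx[0])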
- Streaming Lossless Volumetric Compression of Medical Images Using Gated Recurrent Convolutional Neural Network [0.0]
This paper introduces a hardware-friendly streaming lossless volumetric compression framework.
We propose a gated recurrent convolutional neural network that combines diverse convolutional structures and fusion gate mechanisms (a toy gate is sketched after this entry).
Our method exhibits robust generalization ability and competitive compression speed.
arXiv Detail & Related papers (2023-11-27T07:19:09Z)
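A toy rendering of a fusion-gate mechanism as described in the entry above: two convolutional branches of different shapes are blended by a sigmoid gate while a recurrent state is carried across volume slices. All kernels here are random and hypothetical; the paper's actual architecture is certainly more elaborate.

    import numpy as np

    rng = np.random.default_rng(2)

    def conv_same(x, w):
        # 'same'-padded single-channel 2-D correlation (illustration only).
        p = w.shape[0] // 2
        xp = np.pad(x, p)
        out = np.zeros_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = (xp[i:i + w.shape[0], j:j + w.shape[1]] * w).sum()
        return out

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Hypothetical weights: a 3x3 branch, a 1x1 branch, and a gate kernel.
    w3, w1, wg = rng.normal(size=(3, 3)), rng.normal(size=(1, 1)), rng.normal(size=(3, 3))

    volume = rng.random((4, 16, 16))   # a stack of slices from a 3-D medical volume
    h = np.zeros((16, 16))             # recurrent state carried across slices
    for s in volume:
        a = np.tanh(conv_same(s, w3))            # one convolutional structure
        b = np.tanh(conv_same(s, w1))            # a second, cheaper structure
        g = sigmoid(conv_same(s, wg))            # fusion gate in [0, 1]
        h = g * h + (1 - g) * 0.5 * (a + b)      # gated fusion with the running state
    print("state stats:", h.mean(), h.std())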
- Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) [0.0]
Convolutional Neural Networks (CNN) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method using autoencoders is proposed.
arXiv Detail & Related papers (2022-08-26T12:46:16Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- Learning-Driven Lossy Image Compression; A Comprehensive Survey [3.1761172592339375]
This paper surveys recent techniques for lossy image compression that use machine learning (ML) architectures.
We divide the algorithms into several groups based on architecture.
Key findings are emphasized, and possible future directions for researchers are outlined.
arXiv Detail & Related papers (2022-01-23T12:11:31Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs, including quantization, quantization-aware retraining, and entropy coding (a minimal INR fit is sketched after this entry).
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
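To make the INR idea concrete, here is a minimal sketch: a one-hidden-layer coordinate network is fitted to a tiny image, and its weights are then uniformly quantized to 8 bits, standing in for the quantization stage of the pipeline (the retraining and entropy-coding stages are omitted). The network size, activation, and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    # Target "image": a smooth 16x16 gradient the INR must memorise.
    n = 16
    ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing='ij')
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)      # (256, 2)
    target = ((xs + ys) / 2).ravel()[:, None]                # (256, 1)

    # One-hidden-layer MLP: the network weights ARE the compressed image.
    H = 32
    W1, b1 = rng.normal(0, 1.0, (2, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 0.1, (H, 1)), np.zeros(1)

    lr = 0.05
    for step in range(2000):
        z = coords @ W1 + b1
        h = np.sin(z)                  # sinusoidal activation, SIREN-style
        pred = h @ W2 + b2
        err = pred - target
        # Manual backprop for the MSE loss.
        gW2 = h.T @ err / len(coords); gb2 = err.mean(0)
        dh = err @ W2.T * np.cos(z)
        gW1 = coords.T @ dh / len(coords); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    # Crude 8-bit uniform quantization of the weight matrices, a stand-in
    # for the quantization stage of a full INR compression pipeline.
    def quantize(w):
        scale = np.abs(w).max() / 127 + 1e-12
        return np.round(w / scale) * scale

    mse = ((np.sin(coords @ quantize(W1) + b1) @ quantize(W2) + b2 - target) ** 2).mean()
    print("reconstruction MSE after weight quantization:", mse)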
- Image Compression with Recurrent Neural Network and Generalized Divisive Normalization [3.0204520109309843]
Deep learning has gained huge attention from the research community and produced promising image reconstruction results.
Recent methods have focused on developing deeper and more complex networks, significantly increasing network complexity.
In this paper, two effective novel blocks are developed: analysis and synthesis blocks that employ a convolution layer and Generalized Divisive Normalization (GDN) on the variable-rate encoder and decoder sides.
arXiv Detail & Related papers (2021-09-05T05:31:55Z)
- An Information Theory-inspired Strategy for Automatic Network Pruning [88.51235160841377]
Deep convolutional neural networks commonly need to be compressed for deployment on devices with resource constraints.
Most existing network pruning methods require laborious human effort and prohibitive computational resources.
We propose an information theory-inspired strategy for automatic model compression.
arXiv Detail & Related papers (2021-08-19T07:03:22Z)
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [70.0243910593064]
Key to success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function (demonstrated in the sketch after this entry).
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
arXiv Detail & Related papers (2020-10-29T15:47:26Z)
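The permutation observation from the entry above is easy to verify directly: reordering the hidden units of one layer, and applying the same reordering to the next layer's inputs, leaves the computed function unchanged. This sketch uses hypothetical random weights.

    import numpy as np

    rng = np.random.default_rng(4)

    # Two adjacent fully connected layers with a ReLU in between.
    W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
    x = rng.normal(size=(5, 8))
    relu = lambda t: np.maximum(t, 0)
    y = relu(x @ W1) @ W2

    # Permute the hidden units: reorder W1's output columns and W2's input
    # rows with the same permutation. The function computed is unchanged,
    # but the regrouped weights may vector-quantize with less error.
    perm = rng.permutation(16)
    y_perm = relu(x @ W1[:, perm]) @ W2[perm, :]

    print("max difference:", np.abs(y - y_perm).max())   # ~0: same function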
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by PowerSGD for centralized deep learning, this algorithm uses power steps to maximize the information transferred per bit (see the sketch below).
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
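A minimal sketch of the power-step idea referenced above: a few power-iteration steps produce a rank-1 approximation of the difference between neighboring workers' parameters, so far fewer numbers need to be exchanged. The matrix sizes and the number of power steps are illustrative; PowerGossip's actual protocol (warm starts, gossip structure) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)

    # Difference between two neighboring workers' weight matrices.
    delta = rng.normal(size=(64, 32))

    # Power iteration yields a rank-1 approximation p @ q.T, so the
    # neighbors exchange 64 + 32 numbers instead of 64 * 32.
    q = rng.normal(size=(32, 1))
    for _ in range(2):                 # a couple of power steps sharpen the estimate
        p = delta @ q
        p /= np.linalg.norm(p)
        q = delta.T @ p

    approx = p @ q.T                   # rank-1 compressed update
    err = np.linalg.norm(delta - approx) / np.linalg.norm(delta)
    print("relative error of rank-1 update:", err)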