Frequent Pattern Mining approach to Image Compression
- URL: http://arxiv.org/abs/2602.00100v1
- Date: Sat, 24 Jan 2026 20:32:16 GMT
- Title: Frequent Pattern Mining approach to Image Compression
- Authors: Avinash Kadimisetty, C. Oswald, B. Sivalselvan
- Abstract summary: The paper focuses on Image Compression, explaining efficient approaches based on Frequent Pattern Mining (FPM). The proposed compression mechanism clusters similar pixels in the image and uses the cluster identifiers in compression. Redundant data in the image is handled effectively by replacing the DCT phase of conventional JPEG with a combination of k-means clustering and Closed Frequent Sequence Mining.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper focuses on Image Compression, explaining efficient approaches based on Frequent Pattern Mining (FPM). The proposed compression mechanism clusters similar pixels in the image and uses the cluster identifiers in compression. Redundant data in the image is handled effectively by replacing the DCT phase of conventional JPEG with a combination of k-means clustering and Closed Frequent Sequence Mining. To optimize the cardinality of patterns in encoding, efficient pruning techniques are applied through a refinement of the conventional Generalized Sequential Pattern (GSP) mining algorithm. We propose a mechanism for finding the frequency of a sequence that yields a significant reduction in the code table size. The algorithm is tested by compressing benchmark datasets, yielding an improvement of 45% in compression ratios and often outperforming existing alternatives. The image quality metrics PSNR and SSIM show a negligible loss in visual quality.
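As an illustrative sketch of the clustering idea in the abstract (not the authors' exact algorithm), pixel intensities can be replaced with k-means cluster identifiers; the small identifier alphabet and the long runs of repeated identifiers are what a frequent-sequence-mining stage could then exploit. The toy image and all parameter values below are hypothetical:

```python
import numpy as np

def kmeans_1d(values, k, iters=10):
    """Minimal scalar k-means (illustrative only; initialized from distinct values)."""
    centroids = np.unique(values)[:k].astype(float)  # assumes >= k distinct intensities
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = np.abs(values[:, None] - centroids[None, :]).argmin(axis=1)
        # Recompute centroids; keep the old value if a cluster is empty.
        for c in range(k):
            members = values[labels == c]
            if members.size:
                centroids[c] = members.mean()
    return labels, centroids

# Toy "image": a flat strip with three intensity bands (hypothetical data).
pixels = np.concatenate([np.full(20, 30.0), np.full(24, 128.0), np.full(20, 220.0)])
labels, centroids = kmeans_1d(pixels, k=3)

# The compressed stream stores small cluster identifiers plus 3 centroids;
# runs of repeated identifiers are the redundancy sequence mining would target.
reconstructed = centroids[labels]
print(np.abs(reconstructed - pixels).max())
```

On this toy input the cluster centroids coincide with the three bands, so the identifier-based reconstruction is exact; real images would trade some distortion for the smaller alphabet.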
Related papers
- Lossy Image Compression -- A Frequent Sequence Mining perspective employing efficient Clustering [0.5833117322405447]
This work explores the scope of Frequent Sequence Mining in the domain of Lossy Image Compression. The DCT phase in JPEG is replaced with a combination of closed frequent sequence mining and k-means clustering to handle the redundant data effectively.
arXiv Detail & Related papers (2026-01-24T20:44:55Z) - Image Compression Using Singular Value Decomposition [0.0]
This study investigates the use of Singular Value Decomposition and low-rank matrix approximations for image compression. Results show that the low-rank approximations often produce images that appear visually similar to the originals. At low tolerated error levels, the compressed representation produced by Singular Value Decomposition can even exceed the size of the original image.
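A minimal sketch of SVD-based low-rank compression, using a synthetic rank-5 matrix as a stand-in for a real image; the storage comparison at the end illustrates the summary's caveat, since keeping rank r costs r(m+n+1) values and exceeds the original m*n values once r is large enough:

```python
import numpy as np

# Synthetic "image" with low intrinsic rank (hypothetical data).
m, n, k = 64, 48, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # exactly rank 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top-r singular triplets.
r = 5
A_r = (U[:, :r] * s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)

full_cost = m * n                   # values stored for the original
compressed_cost = r * (m + n + 1)   # values in truncated U, s, Vt
print(rel_err, compressed_cost, full_cost)
```

Here r matches the true rank, so the approximation is near-exact and the truncated factors are far smaller than the original; raising r toward min(m, n) erases the storage advantage.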
arXiv Detail & Related papers (2025-12-18T06:18:37Z) - FD-LSCIC: Frequency Decomposition-based Learned Screen Content Image Compression [67.34466255300339]
This paper addresses three key challenges in SC image compression: learning compact latent features, adapting quantization step sizes, and the lack of large SC datasets. We introduce an adaptive quantization module that learns scaled uniform noise for each frequency component, enabling flexible control over quantization granularity. We construct a large SC image compression dataset (SDU-SCICD10K), which includes over 10,000 images spanning basic SC images, computer-rendered images, and mixed NS and SC images from both PC and mobile platforms.
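The "scaled uniform noise" idea can be sketched as follows (an assumption-laden toy, not the paper's module): during training, non-differentiable rounding is commonly replaced by additive uniform noise scaled by a per-component step, while inference uses hard rounding to the same grid. The latent shape and step values below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((2, 4, 4))        # toy latent: 2 frequency components, 4x4 each
step = np.array([0.1, 0.5])[:, None, None]     # hypothetical per-component quantization steps

# Training-time surrogate: uniform noise in [-step/2, step/2) mimics rounding error
# while keeping the operation differentiable with respect to the latent.
noisy = latent + rng.uniform(-0.5, 0.5, size=latent.shape) * step

# Inference-time: actual rounding to the learned per-component grid.
hard = np.round(latent / step) * step
```

Both paths perturb each value by at most half its step, which is why the noise proxy is a faithful stand-in for rounding during training.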
arXiv Detail & Related papers (2025-02-21T03:15:16Z) - Quantization-aware Matrix Factorization for Low Bit Rate Image Compression [8.009813033356478]
Lossy image compression is essential for efficient transmission and storage. We introduce quantization-aware matrix factorization (QMF) to develop a novel lossy image compression method. Our method consistently outperforms JPEG at low bit rates below 0.25 bits per pixel (bpp) and remains comparable at higher bit rates.
arXiv Detail & Related papers (2024-08-22T19:08:08Z) - Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution [33.69980388844034]
In this paper, we show that the commonly used JPEG algorithm is not best suited for further compression.
We propose Stain Quantized Latent Compression, a novel DL based histopathology data compression approach.
We show that our approach yields superior performance in a classification downstream task, compared to traditional approaches like JPEG.
arXiv Detail & Related papers (2024-06-18T13:47:17Z) - Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies masked image modeling and LIC end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z) - Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms manage rate-quality tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
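A hedged sketch of the quantization-table mechanism such a method would optimize: an 8x8 DCT block transform followed by per-coefficient rounding against a table Q. The table below is hypothetical, neither the JPEG default nor the paper's learned table:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II basis matrix (JPEG-style block transform)."""
    n = np.arange(N)
    D = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2 / N)

D = dct_matrix()
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 10.0  # smooth toy 8x8 block
coeffs = D @ block @ D.T

# Larger steps at higher frequencies discard the detail the eye notices least;
# the entries of this table are the knobs an optimized scheme would tune.
i, j = np.indices((8, 8))
Q = 8.0 + 4.0 * (i + j)   # hypothetical table, not the JPEG default

quantized = np.round(coeffs / Q)             # lossy step: coarse high-frequency bins
reconstructed = D.T @ (quantized * Q) @ D    # dequantize and invert the transform
print(np.abs(reconstructed - block).max())
```

Because the transform is orthonormal, each coefficient's quantization error is at most Q/2, which is the bound a learned table trades against bitrate.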
arXiv Detail & Related papers (2021-12-14T07:35:22Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
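A minimal sketch of the quantization-plus-entropy-coding end of such a pipeline, using random values as a stand-in for trained INR weights; the step size and weight distribution are assumptions:

```python
import numpy as np

# Stand-in for trained INR weights (a real pipeline would quantize an MLP's parameters).
rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000) * 0.05

step = 0.01                                   # hypothetical quantization step
symbols = np.round(weights / step).astype(int)

# Empirical entropy of the symbol distribution estimates the coded size in bits,
# which an entropy coder (e.g. arithmetic coding) can approach.
_, counts = np.unique(symbols, return_counts=True)
p = counts / counts.sum()
entropy_bits = float(-(p * np.log2(p)).sum())
coded_bits = entropy_bits * symbols.size
raw_bits = 32 * weights.size                  # float32 baseline
print(coded_bits, raw_bits)
```

The gap between `coded_bits` and `raw_bits` is where the compression gain lives; quantization-aware retraining, as the summary notes, recovers the quality lost to the rounding step.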
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - A GAN-based Tunable Image Compression System [13.76136694287327]
This paper rethinks content-based compression by using Generative Adversarial Network (GAN) to reconstruct the non-important regions.
A tunable compression scheme is also proposed in this paper to compress an image to any specific compression ratio without retraining the model.
arXiv Detail & Related papers (2020-01-18T02:40:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.