CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless
Compression of High-Color DICOM Medical Images
- URL: http://arxiv.org/abs/2308.13097v1
- Date: Thu, 24 Aug 2023 21:43:04 GMT
- Title: CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless
Compression of High-Color DICOM Medical Images
- Authors: Taaha Khan
- Abstract summary: Medical images require a high color depth of 12 bits per pixel component for accurate analysis by physicians.
Standard raster-based compression of images via filtering is well known; however, it remains suboptimal in the medical domain due to non-specialized implementations.
This study proposes a medical image compression algorithm, CompaCT, that aims to target spatial features and patterns of pixel concentration for dynamically enhanced data processing.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Medical image compression is a widely studied field of data processing due to
its prevalence in modern digital databases. This domain requires a high color
depth of 12 bits per pixel component for accurate analysis by physicians,
primarily in the DICOM format. Standard raster-based compression of images via
filtering is well-known; however, it remains suboptimal in the medical domain
due to non-specialized implementations. This study proposes a lossless medical
image compression algorithm, CompaCT, that aims to target spatial features and
patterns of pixel concentration for dynamically enhanced data processing. The
algorithm employs fractal pixel traversal coupled with a novel approach of
segmentation and meshing between pixel blocks for preprocessing. Furthermore,
delta and entropy coding are applied to this concept for a complete compression
pipeline. The proposal demonstrates that the data compression achieved via
fractal segmentation preprocessing yields enhanced image compression results
while remaining lossless in its reconstruction accuracy. CompaCT is evaluated
in its compression ratios on 3954 high-color CT scans against the efficiency of
industry-standard compression techniques (i.e., JPEG2000, RLE, ZIP, PNG). Its
reconstruction performance is assessed with error metrics to verify lossless
image recovery after decompression. The results demonstrate that CompaCT can
compress and losslessly reconstruct medical images, being 37% more
space-efficient than industry-standard compression systems.
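The paper does not ship code, but the pipeline it describes (space-filling pixel traversal, then delta coding, then entropy coding) can be sketched compactly. The Python sketch below is a hedged illustration only: it assumes a square image with a power-of-two side, uses a Hilbert-curve traversal as the fractal ordering and zlib as the entropy stage, and omits the segmentation and meshing between pixel blocks that CompaCT adds on top.

```python
import zlib
import numpy as np

def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve covering a 2**order square to (x, y)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def compress_sketch(img):
    """Hilbert traversal -> delta coding -> entropy coding (illustrative only)."""
    n = img.shape[0]                      # assumes img is n x n with n a power of two
    order = n.bit_length() - 1
    path = [hilbert_d2xy(order, d) for d in range(n * n)]
    stream = np.array([int(img[y, x]) for x, y in path], dtype=np.int32)
    deltas = np.diff(stream, prepend=0)   # spatially adjacent samples stay adjacent
    # 12-bit samples keep every delta within the int16 range
    return zlib.compress(deltas.astype(np.int16).tobytes(), level=9)

def decompress_sketch(blob, n):
    """Invert the sketch above; the round trip is exact (lossless)."""
    order = n.bit_length() - 1
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int16).astype(np.int32)
    stream = np.cumsum(deltas)
    img = np.zeros((n, n), dtype=np.uint16)
    for d in range(n * n):
        x, y = hilbert_d2xy(order, d)
        img[y, x] = stream[d]
    return img
```

A round trip on a random 12-bit test image, e.g. img = np.random.randint(0, 4096, (256, 256), dtype=np.uint16), reproduces the input exactly, which is the lossless property the paper verifies with error metrics.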
Related papers
- Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution [33.69980388844034]
In this paper, we show that the commonly used JPEG algorithm is not best suited for further compression.
We propose Stain Quantized Latent Compression, a novel DL based histopathology data compression approach.
We show that our approach yields superior performance in a classification downstream task, compared to traditional approaches like JPEG.
arXiv Detail & Related papers (2024-06-18T13:47:17Z)
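As background for the entry above, stain deconvolution itself is the classical Ruifrok-Johnson optical-density unmixing step; the minimal sketch below uses the commonly quoted default H&E stain matrix, which is an assumption for illustration and not necessarily the matrix used in the paper.

```python
import numpy as np

# Widely used H&E(+residual) stain vectors (Ruifrok & Johnson); rows are stains,
# columns are RGB channels, expressed in optical-density space.
RGB_FROM_STAINS = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                            [0.07, 0.99, 0.11],   # eosin
                            [0.27, 0.57, 0.78]])  # residual channel
STAINS_FROM_RGB = np.linalg.inv(RGB_FROM_STAINS)

def stain_deconvolve(rgb):
    """Unmix an H&E-stained 8-bit RGB image into per-stain concentration maps."""
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)  # optical density per channel
    return od @ STAINS_FROM_RGB                            # shape (..., 3 stains)
```

The per-stain concentration maps produced this way are the kind of representation a stain-aware codec can quantize and code instead of raw RGB.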
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
- Streaming Lossless Volumetric Compression of Medical Images Using Gated Recurrent Convolutional Neural Network [0.0]
This paper introduces a hardware-friendly streaming lossless volumetric compression framework.
We propose a gated recurrent convolutional neural network that combines diverse convolutional structures and fusion gate mechanisms.
Our method exhibits robust generalization ability and competitive compression speed.
arXiv Detail & Related papers (2023-11-27T07:19:09Z)
- Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z)
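The frequency-domain grouping mentioned in the entry above can be illustrated without the learned model: the coefficients of each 8x8 DCT block are bucketed by spatial frequency so that each band can later be handled by its own entropy model. The band edges below are arbitrary placeholders; in the paper both the grouping treatment and the entropy model are learned.

```python
import numpy as np

def group_dct_block(block, edges=(0, 5, 10)):
    """Partition an 8x8 DCT coefficient block into frequency bands.

    Coefficients are bucketed by their diagonal index r + c (a rough proxy
    for spatial frequency): DC, low, mid, and high bands with the default edges.
    """
    r, c = np.indices(block.shape)
    freq = r + c
    bands, lo = [], -1
    for hi in (*edges, int(freq.max())):
        bands.append(block[(freq > lo) & (freq <= hi)])
        lo = hi
    return bands
```

Each returned band is a flat array of coefficients with similar statistics, which is what makes a per-group entropy model effective.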
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
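The near-lossless mode in the entry above hinges on an $\ell_\infty$ error bound on the residuals; the textbook uniform residual quantizer below achieves exactly such a bound and is shown only as a sketch of the idea, not as the paper's specific scheme.

```python
import numpy as np

def quantize_residual(residual, tau):
    """Uniformly quantize integer residuals with step 2*tau + 1.

    Guarantees |residual - dequantized| <= tau, i.e. an l-infinity bound of
    tau on the reconstruction error; tau = 0 degenerates to lossless coding.
    """
    step = 2 * tau + 1
    return np.sign(residual) * ((np.abs(residual) + tau) // step)

def dequantize_residual(q, tau):
    return q * (2 * tau + 1)
```

For any integer residual array r and bound tau, np.abs(r - dequantize_residual(quantize_residual(r, tau), tau)).max() <= tau holds.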
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
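An INR pipeline stores an image as the weights of a small coordinate network, so compression reduces to quantizing and entropy-coding those weights. The sketch below covers only these last two stages under an assumed per-tensor 8-bit quantization; fitting the network and the quantization-aware retraining mentioned in the entry are omitted.

```python
import zlib
import numpy as np

def compress_inr_weights(weights, bits=8):
    """Per-tensor symmetric quantization of INR weights, then entropy coding.

    `weights` is a list of float arrays (one per layer); returns the compressed
    payload plus the (shape, scale) metadata needed for decoding.
    """
    qmax = 2 ** (bits - 1) - 1
    payload, meta = [], []
    for w in weights:
        scale = max(float(np.abs(w).max()), 1e-12) / qmax
        q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
        payload.append(q.tobytes())
        meta.append((w.shape, scale))
    return zlib.compress(b"".join(payload), level=9), meta

def decompress_inr_weights(blob, meta):
    """Recover dequantized weight tensors from the compressed payload."""
    raw = zlib.decompress(blob)
    out, offset = [], 0
    for shape, scale in meta:
        n = int(np.prod(shape))
        q = np.frombuffer(raw, dtype=np.int8, count=n, offset=offset)
        out.append(q.astype(np.float32).reshape(shape) * scale)
        offset += n
    return out
```

Decoding the weights and re-evaluating the network at every pixel coordinate then reconstructs the (lossy) image.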
- Lossy Medical Image Compression using Residual Learning-based Dual Autoencoder Model [12.762298148425794]
We propose a two-stage autoencoder based compressor-decompressor framework for compressing malaria RBC cell image patches.
The proposed residual-based dual autoencoder network is trained to extract the unique features which are then used to reconstruct the original image.
The algorithm exhibits a significant improvement in bit savings of 76%, 78%, 75%, and 74% over JPEG-LS, JP2K-LM, CALIC, and a recent neural network approach, respectively.
arXiv Detail & Related papers (2021-08-24T08:38:58Z)
- Regularized Compression of MRI Data: Modular Optimization of Joint Reconstruction and Coding [2.370481325034443]
We propose a framework for joint optimization of the MRI reconstruction and lossy compression.
Our method produces compressed representations of medical images that achieve improved trade-offs between quality and bit-rate.
Compared to regularization-based solutions, our optimization method provides PSNR gains between 0.5 to 1 dB at high bit-rates.
arXiv Detail & Related papers (2020-10-08T15:32:52Z)
- Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
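The lossy-plus-residual construction in the entry above is independent of BPG and the learned coder: store a lossy base reconstruction, losslessly code the residual against it, and concatenate the two bitstreams. The sketch below substitutes crude bit-depth truncation for BPG and zlib for the learned residual coder, purely to show why the combined stream decodes losslessly.

```python
import zlib
import numpy as np

def lossy_plus_residual_encode(img, drop_bits=4):
    """Encode an 8-bit image as (lossy base bitstream, residual bitstream)."""
    base = (img >> drop_bits) << drop_bits       # stand-in for the lossy (BPG) reconstruction
    residual = (img - base).astype(np.uint8)     # always in [0, 2**drop_bits)
    return zlib.compress(base.tobytes(), 9), zlib.compress(residual.tobytes(), 9)

def lossy_plus_residual_decode(base_bs, res_bs, shape):
    """Decode both bitstreams and add them back together."""
    base = np.frombuffer(zlib.decompress(base_bs), dtype=np.uint8).reshape(shape)
    residual = np.frombuffer(zlib.decompress(res_bs), dtype=np.uint8).reshape(shape)
    return base + residual                       # exact original image
```

In the paper the residual coder is not generic: a CNN-based probabilistic model conditioned on the BPG reconstruction supplies the residual's entropy model, which is what makes the scheme competitive.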
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
- A GAN-based Tunable Image Compression System [13.76136694287327]
This paper rethinks content-based compression by using a Generative Adversarial Network (GAN) to reconstruct the non-important regions.
A tunable compression scheme is also proposed in this paper to compress an image to any specific compression ratio without retraining the model.
arXiv Detail & Related papers (2020-01-18T02:40:09Z)