Lossy Medical Image Compression using Residual Learning-based Dual
Autoencoder Model
- URL: http://arxiv.org/abs/2108.10579v1
- Date: Tue, 24 Aug 2021 08:38:58 GMT
- Authors: Dipti Mishra, Satish Kumar Singh, Rajat Kumar Singh
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a two-stage autoencoder-based
compressor-decompressor framework for compressing malaria RBC cell image
patches. Medical images used for disease diagnosis are often several
gigabytes in size. The proposed residual-based dual autoencoder network is
trained to extract distinctive features, which the decompressor module then
uses to reconstruct the original image. Two latent space representations
(the first for the original image, the second for the residual image) are
combined to rebuild the final image. Color-SSIM is used specifically to
assess the quality of the chrominance component of the cell images after
decompression. The empirical results indicate that the proposed work
outperforms other neural-network-based compression techniques for medical
images by approximately 35%, 10% and 5% in PSNR, Color-SSIM and MS-SSIM
respectively. The algorithm achieves bit savings of 76%, 78%, 75% and 74%
over JPEG-LS, JP2K-LM, CALIC and a recent neural network approach
respectively, making it an effective compression-decompression technique.
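The two-stage pipeline described in the abstract (compress the image, compress the residual the first stage leaves behind, then sum the two reconstructions) can be sketched with stand-in components. The average-pooling "autoencoder" and the rounding "residual coder" below are illustrative placeholders, not the paper's trained networks:

```python
import numpy as np

def encode(img, f=2):
    # Stand-in for the first autoencoder's bottleneck: 2x2 average pooling.
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def decode(z, f=2):
    # Stand-in decoder: nearest-neighbour upsampling back to full size.
    return np.repeat(np.repeat(z, f, axis=0), f, axis=1)

def dual_codec(img):
    z1 = encode(img)          # first latent: coarse image content
    recon1 = decode(z1)
    residual = img - recon1   # detail the first stage missed
    z2 = np.round(residual)   # second latent: coded residual (placeholder)
    return recon1 + z2        # final image rebuilt from both latents

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(8, 8))

err_single = np.abs(img - decode(encode(img))).max()
err_dual = np.abs(img - dual_codec(img)).max()
assert err_dual <= 0.5        # the residual stage bounds the error
assert err_dual < err_single  # and beats the single-stage reconstruction
```

The point of the sketch is structural: whatever error the first stage commits becomes the input to the second stage, so the summed reconstruction can only be as inaccurate as the residual coder itself.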
Related papers
- Recompression Based JPEG Tamper Detection and Localization Using Deep Neural Network Eliminating Compression Factor Dependency [2.8498944632323755]
We propose a Convolutional Neural Network-based deep learning architecture capable of detecting recompression-based forgery in JPEG images.
In this work, we also aim to localize the manipulated regions of the image based on recompression features, using the trained neural network.
arXiv Detail & Related papers (2024-07-03T09:19:35Z) - UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z) - Image Compression and Decompression Framework Based on Latent Diffusion
Model for Breast Mammography [0.0]
This research presents a novel framework for the compression and decompression of medical images utilizing the Latent Diffusion Model (LDM).
The LDM represents an advancement over the denoising diffusion probabilistic model (DDPM), with the potential to yield superior image quality.
A possible application of LDM and Torchvision for image upscaling has been explored using medical image data.
arXiv Detail & Related papers (2023-10-08T22:08:59Z) - CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless Compression of High-Color DICOM Medical Images [0.0]
Medical images require a high color depth of 12 bits per pixel component for accurate analysis by physicians.
Standard-based compression of images via filtering is well-known; however, it remains suboptimal in the medical domain due to non-specialized implementations.
This study proposes a medical image compression algorithm, CompaCT, that aims to target spatial features and patterns of pixel concentration for dynamically enhanced data processing.
arXiv Detail & Related papers (2023-08-24T21:43:04Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
However, LIC methods fail to explicitly explore the image structure and texture components that are crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling (MIM) end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Are Visual Recognition Models Robust to Image Compression? [23.280147529096908]
We analyze the impact of image compression on visual recognition tasks.
We consider a wide range of compression levels, from 0.1 to 2 bits per pixel (bpp).
We find that for all three tasks, the recognition ability is significantly impacted when using strong compression.
arXiv Detail & Related papers (2023-04-10T11:30:11Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem within a variational autoencoder (VAE) framework.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
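The near-lossless mode hinges on quantizing residuals so that no pixel deviates by more than a bound tau. A minimal sketch of such an ℓ∞-bounded uniform quantizer (the standard JPEG-LS-style construction, shown for illustration rather than as DLPR's exact coder):

```python
import numpy as np

def quantize_linf(residual, tau):
    # Uniform quantizer with step 2*tau + 1: every reconstructed
    # residual lies within tau of the original (an l_inf guarantee).
    step = 2 * tau + 1
    q = np.sign(residual) * np.floor((np.abs(residual) + tau) / step)
    return (q * step).astype(residual.dtype)

rng = np.random.default_rng(1)
res = rng.integers(-50, 51, size=1000)
for tau in (0, 1, 2, 4):
    worst = np.max(np.abs(res - quantize_linf(res, tau)))
    assert worst <= tau   # the error bound holds for every tau
```

With tau = 0 the step is 1 and the quantizer reproduces integer residuals exactly, so the same machinery covers the lossless case; larger tau trades a guaranteed worst-case error for fewer quantizer bins and thus fewer bits.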
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z) - Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
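The lossy-plus-residual recipe is simple to state: store the lossy reconstruction's bitstream together with a separately coded residual, and the decoder recovers the original exactly by summing the two. A toy sketch, with crude coarse quantization standing in for BPG and raw storage standing in for the learned residual coder:

```python
import numpy as np

def lossy_encode(img, q=16):
    # Crude stand-in for BPG: coarse uniform quantization of pixel values.
    return (img // q) * q

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(16, 16), dtype=np.int32)

lossy = lossy_encode(img)        # first stream: lossy reconstruction
residual = img - lossy           # second stream: what the lossy codec missed
restored = lossy + residual      # decoder: sum the two streams
assert np.array_equal(restored, img)  # exactly lossless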
arXiv Detail & Related papers (2020-03-23T11:21:52Z) - Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.