Quantifying the effect of image compression on supervised learning
applications in optical microscopy
- URL: http://arxiv.org/abs/2009.12570v1
- Date: Sat, 26 Sep 2020 11:25:57 GMT
- Title: Quantifying the effect of image compression on supervised learning
applications in optical microscopy
- Authors: Enrico Pomarico, Cédric Schmidt, Florian Chays, David Nguyen,
Arielle Planchette, Audrey Tissot, Adrien Roux, Stéphane Pagès, Laura
Batti, Christoph Clausen, Theo Lasser, Aleksandra Radenovic, Bruno
Sanguinetti, and Jérôme Extermann
- Abstract summary: Lossy image compression risks producing unpredictable artifacts.
We show that predictions on object- and image-specific segmentation parameters can be altered by up to 15%.
Our technique can be generalized to validate a variety of data analysis pipelines in SL-assisted fields.
- Score: 41.74498230885008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impressive growth of data throughput in optical microscopy has triggered
a widespread use of supervised learning (SL) models running on compressed image
datasets for efficient automated analysis. However, since lossy image
compression risks producing unpredictable artifacts, quantifying the effect of
data compression on SL applications is of pivotal importance to assess their
reliability, especially for clinical use. We propose an experimental method to
evaluate the tolerability of image compression distortions in 2D and 3D cell
segmentation SL tasks: predictions on compressed data are compared to the raw
predictive uncertainty, which is numerically estimated from the raw noise
statistics measured through sensor calibration. We show that predictions on
object- and image-specific segmentation parameters can be altered by up to 15%
and more than 10 standard deviations after 16-to-8 bits downsampling or JPEG
compression. In contrast, a recently developed lossless compression algorithm
provides a prediction spread which is statistically equivalent to that stemming
from raw noise, while providing a compression ratio of up to 10:1. By setting a
lower bound to the SL predictive uncertainty, our technique can be generalized
to validate a variety of data analysis pipelines in SL-assisted fields.
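The comparison described in the abstract can be sketched in a few lines of Python. Everything below is an illustrative assumption, not the paper's actual pipeline: the "segmentation parameter" is a toy thresholded-area measure, the image is a synthetic blob, and the noise levels stand in for a real sensor calibration. The idea is the same, though: estimate the raw predictive spread from noise replicates, then express the shift caused by 16-to-8 bit downsampling in units of that spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_area(img, thresh):
    """Toy 'segmentation parameter': number of pixels above threshold."""
    return int((img > thresh).sum())

# Synthetic 16-bit microscopy-like frame: a smooth bright blob on background.
yy, xx = np.mgrid[:64, :64]
signal = 2000.0 + 18000.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)

def noisy(clean):
    # Shot noise (Poisson) plus Gaussian read noise, as a sensor
    # calibration might characterize them (values are assumptions).
    return rng.poisson(clean) + rng.normal(0.0, 50.0, clean.shape)

thresh = 10000.0

# Raw predictive uncertainty: spread of the parameter over noise replicates.
areas = [segment_area(noisy(signal), thresh) for _ in range(200)]
mu, sigma = np.mean(areas), np.std(areas)

# 16-to-8 bit downsampling of one noisy frame, then the same "prediction".
raw = noisy(signal)
img8 = np.clip(raw / 65535.0 * 255.0, 0, 255).astype(np.uint8)
area8 = segment_area(img8.astype(float) / 255.0 * 65535.0, thresh)

shift_sigmas = abs(area8 - mu) / sigma
print(f"raw: {mu:.1f} +/- {sigma:.1f} px, 8-bit: {area8} px "
      f"({shift_sigmas:.1f} sigma)")
```

A compression scheme would then be judged tolerable when `shift_sigmas` stays within the raw-noise spread, which is the lower bound the paper sets on the SL predictive uncertainty.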
Related papers
- CALLIC: Content Adaptive Learning for Lossless Image Compression [64.47244912937204]
CALLIC sets a new state-of-the-art (SOTA) for learned lossless image compression.
We propose a content-aware autoregressive self-attention mechanism by leveraging convolutional gating operations.
During encoding, we decompose pre-trained layers, including depth-wise convolutions, using low-rank matrices, and then adapt the incremental weights on the testing image by Rate-guided Progressive Fine-Tuning (RPFT).
RPFT fine-tunes with a gradually increasing number of patches, sorted in descending order of estimated entropy, optimizing the learning process and reducing adaptation time.
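The patch scheduling behind RPFT can be sketched as follows. This is a minimal sketch under stated assumptions: the entropy estimate is a plain histogram entropy (the paper's estimator is not specified here), and `rpft_schedule` only illustrates the descending-entropy ordering with growing subsets, not the fine-tuning itself.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Histogram-based Shannon entropy of pixel values (a simple proxy)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rpft_schedule(img, patch=16, steps=4):
    """Order patches by estimated entropy (descending) and yield a
    curriculum of gradually growing patch subsets."""
    h, w = img.shape
    patches = [img[i:i + patch, j:j + patch]
               for i in range(0, h, patch) for j in range(0, w, patch)]
    order = sorted(range(len(patches)),
                   key=lambda k: patch_entropy(patches[k]), reverse=True)
    for s in range(1, steps + 1):
        n = max(1, len(order) * s // steps)
        yield [patches[k] for k in order[:n]]  # fine-tune on this subset

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)
sizes = [len(subset) for subset in rpft_schedule(img)]
print(sizes)  # → [4, 8, 12, 16]
```

High-entropy patches come first because they dominate the rate estimate, so early adaptation steps see the content that is hardest to compress.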
arXiv Detail & Related papers (2024-12-23T10:41:18Z)
- Unlocking the Potential of Digital Pathology: Novel Baselines for Compression [31.13721473800084]
Lossy compression can introduce color and texture disparities in pathological Whole Slide Images (WSI).
Deep learning models fine-tuned for perceptual quality outperform conventional compression schemes like JPEG-XL or WebP for further compression of WSI.
Our study provides novel insights for the assessment of lossy compression schemes for WSI and encourages a unified evaluation of lossy compression schemes to accelerate the clinical uptake of digital pathology.
arXiv Detail & Related papers (2024-12-17T18:04:33Z)
- A Rate-Distortion-Classification Approach for Lossy Image Compression [0.0]
In lossy image compression, the objective is to achieve minimal signal distortion while compressing images to a specified bit rate.
To bridge the gap between image compression and visual analysis, we propose a Rate-Distortion-Classification (RDC) model for lossy image compression.
arXiv Detail & Related papers (2024-05-06T14:11:36Z)
- Probing Image Compression For Class-Incremental Learning [8.711266563753846]
Continual machine learning (ML) systems rely on storing representative samples, known as exemplars, within a limited memory budget to maintain performance on previously learned data.
In this paper, we explore the use of image compression as a strategy to enhance the buffer's capacity, thereby increasing exemplar diversity.
We introduce a new framework to incorporate image compression for continual ML including a pre-processing data compression step and an efficient compression rate/algorithm selection method.
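The buffer-capacity idea above can be illustrated with a small selection routine. All numbers here are hypothetical: the candidate compression ratios, the PSNR values, and the quality floor are made up for illustration and are not taken from the paper.

```python
def max_exemplars(budget_bytes, raw_size, candidates):
    """Pick the compression setting that fits the most exemplars into a
    fixed buffer, subject to a minimum quality (hypothetical numbers)."""
    best = None
    for ratio, quality in candidates:  # (compression ratio, PSNR in dB)
        if quality < 30.0:             # assumed quality floor
            continue
        count = budget_bytes // int(raw_size / ratio)
        if best is None or count > best[0]:
            best = (count, ratio, quality)
    return best

# Hypothetical settings: lossless 2:1, mild lossy 8:1, aggressive 20:1.
settings = [(2, 60.0), (8, 38.0), (20, 27.5)]
print(max_exemplars(budget_bytes=10_000_000, raw_size=150_000,
                    candidates=settings))
# 20:1 fails the quality floor, so 8:1 wins: (533, 8, 38.0)
```

The trade-off is the same one the framework automates: higher compression buys exemplar diversity, but only up to the point where distortion starts to hurt the rehearsal signal.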
arXiv Detail & Related papers (2024-03-10T18:58:14Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Deep learning based Image Compression for Microscopy Images: An Empirical Study [3.915183869199319]
This study analyzes classic and deep learning based image compression methods, and their impact on deep learning based image processing models.
To find a suitable compression scheme, multiple classical lossy image compression techniques are compared to several AI-based compression models.
We found that AI-based compression techniques largely outperform the classic ones and will minimally affect the downstream label-free task in 2D cases.
arXiv Detail & Related papers (2023-11-02T16:00:32Z)
- Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields a strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
arXiv Detail & Related papers (2023-07-17T06:14:19Z)
- Machine Perception-Driven Image Compression: A Layered Generative Approach [32.23554195427311]
A layered generative image compression model is proposed to achieve high human-vision-oriented reconstructed image quality.
A task-agnostic learning-based compression model is proposed, which effectively supports various compressed-domain analytical tasks.
A joint optimization schedule is adopted to find the best balance among compression ratio, reconstructed image quality, and downstream perception performance.
arXiv Detail & Related papers (2023-04-14T02:12:38Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.