DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data Degradations and OOD Model Predictions
- URL: http://arxiv.org/abs/2308.08365v1
- Date: Wed, 16 Aug 2023 13:40:01 GMT
- Title: DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data Degradations and OOD Model Predictions
- Authors: Nuno Pimpão Martins, Yannis Kalaidzidis, Marino Zerial, Florian Jug
- Abstract summary: We propose a new method to counteract blurring and contrast loss in microscopy images.
We first synthetically degraded the quality of microscopy images even further by using an approximate forward model for deep tissue image degradations.
We trained a neural network that learned the inverse of this degradation function from our generated pairs of raw and degraded images.
- Score: 6.550912532749276
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Microscopy images are crucial for life science research, allowing detailed
inspection and characterization of cellular and tissue-level structures and
functions. However, microscopy data are unavoidably affected by image
degradations, such as noise, blur, or others. Many such degradations also
contribute to a loss of image contrast, which becomes especially pronounced in
deeper regions of thick samples. Today, the best-performing methods for
improving image quality are based on deep learning approaches, which typically
require ground truth (GT) data during training. Our inability to counteract
blurring and contrast loss when imaging deep into samples prevents the
acquisition of such clean GT data. The fact that the forward process of
blurring and contrast loss deep into tissue can be modeled allowed us to
propose a new method that circumvents the problem of unobtainable GT data.
To this end, we first synthetically degraded the quality of microscopy images
even further by using an approximate forward model for deep tissue image
degradations. Then we trained a neural network that learned the inverse of this
degradation function from our generated pairs of raw and degraded images. We
demonstrated that networks trained in this way can be used out-of-distribution
(OOD) to improve the quality of less severely degraded images, e.g. the raw
data imaged in a microscope. Since the absolute level of degradation in such
microscopy images can be stronger than the additional degradation introduced by
our forward model, we also explored the effect of iterative predictions. Here,
we observed that with each iteration the measured image contrast kept improving
while detailed structures in the images were increasingly removed. Therefore,
depending on the desired downstream analysis, a balance between contrast
improvement and retention of image details has to be found.
Related papers
- Phenotype-preserving metric design for high-content image reconstruction by generative inpainting [0.0]
We evaluate the state-of-the-art inpainting methods for image restoration in a high-content fluorescence microscopy dataset of cultured cells.
We show that architectures like DeepFill V2 and Edge Connect can faithfully restore microscopy images upon fine-tuning with relatively little data.
arXiv Detail & Related papers (2023-07-26T18:13:16Z)
- LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network [0.31402652384742363]
This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images.
LUCYD combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network.
Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images.
arXiv Detail & Related papers (2023-07-16T10:34:23Z)
- Generalizable Denoising of Microscopy Images using Generative Adversarial Networks and Contrastive Learning [0.0]
We propose a novel framework for few-shot microscopy image denoising.
Our approach combines a generative adversarial network (GAN) trained via contrastive learning (CL) with two structure preserving loss terms.
We demonstrate the effectiveness of our method on three well-known microscopy imaging datasets.
arXiv Detail & Related papers (2023-03-27T13:55:07Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Multiscale Structure Guided Diffusion for Image Deblurring [24.09642909404091]
Diffusion Probabilistic Models (DPMs) have been employed for image deblurring.
We introduce a simple yet effective multiscale structure guidance as an implicit bias.
We demonstrate more robust deblurring results with fewer artifacts on unseen data.
arXiv Detail & Related papers (2022-12-04T10:40:35Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Deep learning-based bias transfer for overcoming laboratory differences of microscopic images [0.0]
We evaluate, compare, and improve existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images.
Adapting the bias of the samples significantly improved the pixel-level segmentation for human kidney glomeruli and podocytes and improved the classification accuracy for human prostate biopsies by up to 14%.
arXiv Detail & Related papers (2021-05-25T09:02:30Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)