Cross-system biological image quality enhancement based on the generative adversarial network as a foundation for establishing a multi-institute microscopy cooperative network
- URL: http://arxiv.org/abs/2403.18026v1
- Date: Tue, 26 Mar 2024 18:23:31 GMT
- Title: Cross-system biological image quality enhancement based on the generative adversarial network as a foundation for establishing a multi-institute microscopy cooperative network
- Authors: Dominik Panek, Carina Rząca, Maksymilian Szczypior, Joanna Sorysz, Krzysztof Misztal, Zbigniew Baster, Zenon Rajfur
- Abstract summary: High-quality fluorescence imaging of biological systems is limited by processes like photobleaching and phototoxicity.
We propose a generative-adversarial network (GAN) for contrast transfer between two different microscopy systems.
Our model demonstrates that such transfer is possible, yielding HQ-generated images characterized by low mean squared error (MSE) values, high structural similarity index (SSIM) values, and high peak signal-to-noise ratio (PSNR) values.
- Score: 0.5235143203977018
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: High-quality fluorescence imaging of biological systems is limited by processes such as photobleaching and phototoxicity and, in many cases, by limited access to the latest generations of microscopes. Moreover, low temporal resolution can lead to motion blur in living systems. Our work presents a deep learning (DL) generative-adversarial approach to obtaining high-quality (HQ) images from their low-quality (LQ) equivalents. We propose a generative adversarial network (GAN) for contrast transfer between two separate microscopy systems: a confocal microscope (producing HQ images) and a wide-field fluorescence microscope (producing LQ images). Our model demonstrates that such transfer is possible, yielding HQ-generated images characterized by low mean squared error (MSE) values, high structural similarity index (SSIM) values, and high peak signal-to-noise ratio (PSNR) values. For our best model, comparing HQ-generated images with HQ ground-truth images gives median metric values of 6×10⁻⁴, 0.9413, and 31.87 for MSE, SSIM, and PSNR, respectively. In contrast, comparing LQ images with HQ ground-truth images gives median values of 0.0071, 0.8304, and 21.48 for MSE, SSIM, and PSNR, respectively. We therefore observe significant increases of roughly 14% for SSIM and 49% for PSNR. These results, together with other single-system cross-modality studies, provide a proof of concept for further implementation of cross-system biological image quality enhancement.
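The relative gains quoted in the abstract follow directly from the reported median metric values. The sketch below (an illustrative check, not the authors' code; the `mse` and `psnr` helpers are the standard textbook formulas, assuming images scaled to a data range of 1.0) reproduces that arithmetic:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images on [0, data_range]."""
    return float(10.0 * np.log10(data_range ** 2 / mse(a, b)))

# Median metric values reported in the abstract:
#   HQ-generated vs. HQ ground truth, and LQ vs. HQ ground truth.
ssim_gen, ssim_lq = 0.9413, 0.8304
psnr_gen, psnr_lq = 31.87, 21.48

ssim_gain = 100.0 * (ssim_gen - ssim_lq) / ssim_lq  # ~13.4% (reported as 14%)
psnr_gain = 100.0 * (psnr_gen - psnr_lq) / psnr_lq  # ~48.4% (reported as 49%)

# Consistency check: with a data range of 1.0, the reported median MSE of
# 6e-4 implies a PSNR of 10*log10(1/6e-4) ~ 32.2 dB, close to the
# reported 31.87 dB median.
implied_psnr = 10.0 * np.log10(1.0 / 6e-4)
```

Note that the PSNR implied by the median MSE need not match the median PSNR exactly, since the median of a nonlinear transform is taken over per-image values.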
Related papers
- Biology-driven assessment of deep learning super-resolution imaging of the porosity network in dentin [3.6401695744986866]
The mechanosensory system of teeth is believed to partly rely on odontoblast cell stimulation by fluid flow through a porosity network extending through dentin.
Visualizing the smallest sub-microscopic porosity vessels requires the highest achievable resolution from confocal fluorescence microscopy.
We tested different deep learning (DL) super-resolution (SR) models to allow faster experimental acquisitions of lower-resolution images and restore optimal image quality by post-processing.
arXiv Detail & Related papers (2025-10-09T16:26:38Z) - In silico Deep Learning Protocols for Label-Free Super-Resolution Microscopy: A Comparative Study of Network Architectures and SNR Dependence [6.165323448459655]
A key limitation often cited by optical microscopists is the limit of lateral resolution.
This study seeks to evaluate an alternative and economical approach to achieving super-resolution (SR) optical microscopy.
arXiv Detail & Related papers (2025-09-23T07:32:40Z) - A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z) - Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using a conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z) - Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z) - Evaluating the Quality and Diversity of DCGAN-based Generatively Synthesized Diabetic Retinopathy Imagery [0.07499722271664144]
Publicly available diabetic retinopathy (DR) datasets are imbalanced, containing limited numbers of images with DR.
The imbalance can be addressed using Generative Adversarial Networks (GANs) to augment the datasets with synthetic images.
To evaluate the quality and diversity of synthetic images, several evaluation metrics, such as Multi-Scale Structural Similarity Index (MS-SSIM), Cosine Distance (CD), and Fréchet Inception Distance (FID), are used.
arXiv Detail & Related papers (2022-08-10T23:50:01Z) - Flow-based Visual Quality Enhancer for Super-resolution Magnetic Resonance Spectroscopic Imaging [13.408365072149795]
We propose a flow-based enhancer network to improve the visual quality of super-resolution MRSI.
Our enhancer network incorporates anatomical information from additional image modalities (MRI) and uses a learnable base distribution.
Our method also allows visual quality adjustment and uncertainty estimation.
arXiv Detail & Related papers (2022-07-20T20:19:44Z) - Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on structural similarity index (SSIM) regression is proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - Deep learning-based bias transfer for overcoming laboratory differences of microscopic images [0.0]
We evaluate, compare, and improve existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images.
Adapting the bias of the samples significantly improved the pixel-level segmentation for human kidney glomeruli and podocytes and improved the classification accuracy for human prostate biopsies by up to 14%.
arXiv Detail & Related papers (2021-05-25T09:02:30Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations from among the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z) - Augmented Equivariant Attention Networks for Microscopy Image Reconstruction [44.965820245167635]
It is time-consuming and expensive to take high-quality or high-resolution electron microscopy (EM) and fluorescence microscopy (FM) images.
Deep learning enables us to perform image-to-image transformation tasks for various types of microscopy image reconstruction.
We propose the augmented equivariant attention networks (AEANets) with better capability to capture inter-image dependencies.
arXiv Detail & Related papers (2020-11-06T23:37:49Z) - Photoacoustic Microscopy with Sparse Data Enabled by Convolutional Neural Networks for Fast Imaging [0.9786690381850356]
Photoacoustic microscopy (PAM) has been a promising biomedical imaging technology in recent years.
Reducing sampling density can naturally shorten image acquisition time, which is at the cost of image quality.
We propose a method using convolutional neural networks (CNNs) to improve the quality of sparse PAM images.
arXiv Detail & Related papers (2020-06-08T05:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.