A Value Mapping Virtual Staining Framework for Large-scale Histological Imaging
- URL: http://arxiv.org/abs/2501.03592v1
- Date: Tue, 07 Jan 2025 07:45:21 GMT
- Title: A Value Mapping Virtual Staining Framework for Large-scale Histological Imaging
- Authors: Junjia Wang, Bo Xiong, You Zhou, Xun Cao, Zhan Ma
- Abstract summary: We introduce a general virtual staining framework that is adaptable to various conditions.
We propose a loss function based on the value mapping constraint to ensure the accuracy of virtual coloring between different pathological modalities.
- Score: 36.95712533471744
- Abstract: The emergence of virtual staining technology provides a rapid and efficient alternative for researchers in tissue pathology. It enables the utilization of unlabeled microscopic samples to generate virtual replicas of chemically stained histological slices, or facilitate the transformation of one staining type into another. The remarkable performance of generative networks, such as CycleGAN, offers an unsupervised learning approach for virtual coloring, overcoming the limitations of high-quality paired data required in supervised learning. Nevertheless, large-scale color transformation necessitates processing large field-of-view images in patches, often resulting in significant boundary inconsistency and artifacts. Additionally, the transformation between different colorized modalities typically needs further efforts to modify loss functions and tune hyperparameters for independent training of networks. In this study, we introduce a general virtual staining framework that is adaptable to various conditions. We propose a loss function based on the value mapping constraint to ensure the accuracy of virtual coloring between different pathological modalities, termed the Value Mapping Generative Adversarial Network (VM-GAN). Meanwhile, we present a confidence-based tiling method to address the challenge of boundary inconsistency arising from patch-wise processing. Experimental results on diverse data with varying staining protocols demonstrate that our method achieves superior quantitative indicators and improved visual perception.
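The confidence-based tiling idea can be illustrated with a minimal sketch: process a large field-of-view image in overlapping tiles and blend the results with a smooth weight window that is highest at each tile's centre, so patch borders do not show. The function name, tile sizes, and cosine window below are illustrative assumptions, not VM-GAN's actual implementation (which the abstract does not detail); a single-channel image and an identity `process` stand in for the staining generator.

```python
import numpy as np

def blend_tiles(image, process, tile=128, overlap=32):
    """Process a large single-channel image in overlapping tiles and blend
    the per-tile outputs with a smooth 2D weight window.

    `process` is any per-tile function; here it stands in for the
    virtual-staining generator, which this sketch does not implement.
    """
    h, w = image.shape[:2]
    step = tile - overlap
    # Hann-like window: weight is high at the tile centre and tapers
    # towards the borders, so overlapping tiles cross-fade smoothly.
    ramp = 0.5 - 0.5 * np.cos(2 * np.pi * (np.arange(tile) + 0.5) / tile)
    window = np.outer(ramp, ramp)

    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch = process(image[y:y1, x:x1])
            win = window[: y1 - y, : x1 - x]
            out[y:y1, x:x1] += patch * win
            weight[y:y1, x:x1] += win
    # Normalise by the accumulated weights (strictly positive everywhere).
    return out / np.maximum(weight, 1e-12)
```

With an identity `process`, the blended output reproduces the input exactly, which is a quick sanity check that the windowed accumulation and normalisation cancel correctly.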
Related papers
- Novel computational workflows for natural and biomedical image processing based on hypercomplex algebras [49.81327385913137]
Hypercomplex image processing extends conventional techniques in a unified paradigm encompassing algebraic and geometric principles.
This work leverages quaternions and the two-dimensional planes split framework (splitting a quaternion, representing a pixel, into pairs of 2D planes) for natural/biomedical image analysis.
The proposed approach can regulate color appearance (e.g., with alternative renditions and grayscale conversion) and image contrast, and can be part of automated image processing pipelines.
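The luminance part of such a split can be sketched without a quaternion library: treating each RGB pixel as a pure quaternion r·i + g·j + b·k, the component parallel to the gray axis is an ordinary vector projection, and the orthogonal remainder carries the chrominance. The function names and the saturation-style manipulation below are illustrative assumptions, not the paper's full hypercomplex pipeline.

```python
import numpy as np

GRAY_AXIS = np.ones(3) / np.sqrt(3.0)  # the "luminance" axis mu = (i+j+k)/sqrt(3)

def planes_split(rgb):
    """Split each RGB pixel (viewed as a pure quaternion r*i + g*j + b*k)
    into the component parallel to the gray axis (luminance) and the
    orthogonal remainder (chrominance). For this particular split, a
    plain vector projection is equivalent."""
    lum = rgb @ GRAY_AXIS                   # scalar coordinate along mu
    parallel = lum[..., None] * GRAY_AXIS   # grayscale rendition
    perpendicular = rgb - parallel          # chrominance component
    return parallel, perpendicular

def recolor(rgb, saturation=1.0):
    """Example manipulation: rescale the chrominance component to adjust
    color appearance; saturation=0 yields a grayscale conversion."""
    par, perp = planes_split(np.asarray(rgb, dtype=float))
    return np.clip(par + saturation * perp, 0.0, 1.0)
```

Setting `saturation=0` discards the chrominance plane entirely, which is exactly the grayscale-conversion use case the abstract mentions; `saturation=1` is the identity.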
arXiv Detail & Related papers (2025-02-11T18:38:02Z)
- Stain-Invariant Representation for Tissue Classification in Histology Images [1.1624569521079424]
We propose a framework that generates stain-augmented versions of the training images using a stain perturbation matrix.
We evaluate the performance of the proposed model on cross-domain multi-class tissue type classification of colorectal cancer images.
arXiv Detail & Related papers (2024-11-21T23:50:30Z)
- AGMDT: Virtual Staining of Renal Histology Images with Adjacency-Guided Multi-Domain Transfer [9.8359439975283]
We propose a novel virtual staining framework AGMDT to translate images into other domains by avoiding pixel-level alignment.
The framework discovers patch-level aligned pairs across the serial slices of multiple domains through glomerulus detection and bipartite graph matching.
Experimental results show that the proposed AGMDT achieves a good balance between the precise pixel-level alignment and unpaired domain transfer.
arXiv Detail & Related papers (2023-09-12T17:37:56Z)
- Deep Angiogram: Trivializing Retinal Vessel Segmentation [1.8479315677380455]
We propose a contrastive variational auto-encoder that can filter out irrelevant features and synthesize a latent image, named deep angiogram.
The generalizability of the synthetic network is improved by the contrastive loss that makes the model less sensitive to variations of image contrast and noisy features.
arXiv Detail & Related papers (2023-07-01T06:13:10Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration [66.33746403815283]
We propose a scene-adaptive infrared and visible image registration method.
We employ homography to simulate the deformation between different planes.
We present the first misaligned infrared and visible image dataset with available ground truth.
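Simulating inter-plane deformation with a homography amounts to mapping points through a 3×3 matrix in homogeneous coordinates. The matrix entries below (a small rotation, a translation, and a mild perspective row) are illustrative values, not parameters from the paper.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # divide out the w term

# A homography composed of a 5-degree rotation, a translation and a mild
# perspective term, simulating the deformation between two views of a plane.
theta = np.deg2rad(5.0)
H = np.array([[np.cos(theta), -np.sin(theta), 10.0],
              [np.sin(theta),  np.cos(theta), -4.0],
              [1e-4,           2e-4,           1.0]])

corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
warped = apply_homography(H, corners)
```

Warping the four image corners this way is the usual building block for generating synthetic misaligned training pairs with known ground-truth correspondence.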
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- HistoStarGAN: A Unified Approach to Stain Normalisation, Stain Transfer and Stain Invariant Segmentation in Renal Histopathology [0.5505634045241288]
HistoStarGAN is a unified framework that performs stain transfer between multiple stainings.
It can serve as a synthetic data generator, which paves the way for the use of fully annotated synthetic image data.
arXiv Detail & Related papers (2022-10-18T12:22:26Z)
- Towards Homogeneous Modality Learning and Multi-Granularity Information Exploration for Visible-Infrared Person Re-Identification [16.22986967958162]
Visible-infrared person re-identification (VI-ReID) is a challenging and essential task, which aims to retrieve a set of person images over visible and infrared camera views.
Previous methods attempt to apply generative adversarial networks (GANs) to generate modality-consistent data.
In this work, we address the cross-modality matching problem with Aligned Grayscale Modality (AGM), a unified dark-line spectrum that reformulates visible-infrared dual-mode learning as a gray-gray single-mode learning problem.
arXiv Detail & Related papers (2022-04-11T03:03:19Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Intriguing Properties of Vision Transformers [114.28522466830374]
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems.
We systematically study this question via an extensive set of experiments and comparisons with a high-performing convolutional neural network (CNN).
We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
arXiv Detail & Related papers (2021-05-21T17:59:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.