ImplicitStainer: Data-Efficient Medical Image Translation for Virtual Antibody-based Tissue Staining Using Local Implicit Functions
- URL: http://arxiv.org/abs/2505.09831v1
- Date: Wed, 14 May 2025 22:22:52 GMT
- Title: ImplicitStainer: Data-Efficient Medical Image Translation for Virtual Antibody-based Tissue Staining Using Local Implicit Functions
- Authors: Tushar Kataria, Beatrice Knudsen, Shireen Y. Elhabian
- Abstract summary: Hematoxylin and eosin (H&E) staining is a gold standard for microscopic diagnosis in pathology. However, H&E staining does not capture all the diagnostic information that may be needed.
- Score: 1.9029890402585894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hematoxylin and eosin (H&E) staining is a gold standard for microscopic diagnosis in pathology. However, H&E staining does not capture all the diagnostic information that may be needed. To obtain additional molecular information, immunohistochemical (IHC) stains highlight proteins that mark specific cell types, such as CD3 for T-cells or CK8/18 for epithelial cells. While IHC stains are vital for prognosis and treatment guidance, they are typically only available at specialized centers and are time-consuming to acquire, leading to treatment delays for patients. Virtual staining, enabled by deep learning-based image translation models, provides a promising alternative by computationally generating IHC stains from H&E-stained images. Although many GAN- and diffusion-based image-to-image (I2I) translation methods have been used for virtual staining, these models treat image patches as independent data points, which results in increased and more diverse data requirements for effective generation. We present ImplicitStainer, a novel approach that leverages local implicit functions to improve image translation, specifically virtual staining performance, by focusing on pixel-level predictions. This method enhances robustness to variations in dataset sizes, delivering high-quality results even with limited data. We validate our approach on two datasets using a comprehensive set of metrics and benchmark it against over fifteen state-of-the-art GAN- and diffusion-based models. Full code and trained models will be released publicly via GitHub upon acceptance.
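As a rough illustration of the pixel-level prediction idea described in the abstract, the sketch below shows a generic LIIF-style local implicit decoder: a CNN extracts H&E features, and an MLP maps (local feature, query coordinate) pairs to IHC pixel values. This is not the authors' released ImplicitStainer code; the encoder depth, MLP sizes, and sampling scheme are illustrative assumptions.

```python
# Minimal sketch of a local-implicit-function decoder for virtual staining.
# NOT the authors' ImplicitStainer implementation; all layer sizes and the
# sampling scheme are assumptions illustrating feature + coordinate -> pixel.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalImplicitDecoder(nn.Module):
    """Predicts an IHC pixel value from a local H&E feature and a query coordinate."""

    def __init__(self, feat_dim=64, hidden=256, out_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(            # toy H&E feature extractor (assumption)
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.mlp = nn.Sequential(                # implicit function f(feature, coord) -> RGB
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_channels),
        )

    def forward(self, he_image, coords):
        # he_image: (B, 3, H, W); coords: (B, N, 2) query locations in [-1, 1]
        feats = self.encoder(he_image)                           # (B, C, H, W)
        # Sample the feature map at each query coordinate (bilinear interpolation).
        sampled = F.grid_sample(
            feats, coords.unsqueeze(1), mode="bilinear", align_corners=False
        ).squeeze(2).permute(0, 2, 1)                            # (B, N, C)
        # Concatenate the local feature with its coordinate and decode per pixel.
        return self.mlp(torch.cat([sampled, coords], dim=-1))    # (B, N, 3)


if __name__ == "__main__":
    model = LocalImplicitDecoder()
    he = torch.rand(1, 3, 64, 64)            # dummy H&E patch
    xy = torch.rand(1, 1024, 2) * 2 - 1      # 1024 random query coordinates
    print(model(he, xy).shape)               # predicted IHC values: (1, 1024, 3)
```

Because the decoder is queried per pixel rather than per patch, training pairs can be drawn densely from a small number of images, which is consistent with the data-efficiency claim in the abstract.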
Related papers
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology. We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z) - SCFANet: Style Distribution Constraint Feature Alignment Network For Pathological Staining Translation [0.11999555634662631]
We propose the Style Distribution Constraint Feature Alignment Network (SCFANet), which incorporates two innovative modules: the Style Distribution Constrainer (SDC) and Feature Alignment Learning (FAL). Our SCFANet model outperforms existing methods, achieving precise transformation of H&E-stained images into their IHC-stained counterparts.
arXiv Detail & Related papers (2025-04-01T07:29:53Z) - Unsupervised Latent Stain Adaptation for Computational Pathology [2.483372684394528]
Stain adaptation aims to reduce the generalization error between different stains by training a model on source stains that generalizes to target stains.
We propose Unsupervised Latent Stain Adaptation (ULSA), a joint training scheme over artificially labeled and unlabeled data that includes all available stained images.
Our method uses stain translation to enrich labeled source images with synthetic target images in order to increase the supervised signals.
arXiv Detail & Related papers (2024-06-27T11:08:42Z) - StainDiffuser: MultiTask Dual Diffusion Model for Virtual Staining [1.9029890402585894]
Hematoxylin and Eosin (H&E) staining is the stain most commonly used for disease diagnosis and tumor recurrence tracking.
Deep learning models have made Image-to-Image (I2I) translation a key research area, reducing the need for expensive physical staining processes.
We propose StainDiffuser, a novel dual diffusion architecture for virtual staining that converges under a limited training budget.
arXiv Detail & Related papers (2024-03-17T20:47:52Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Structural Cycle GAN for Virtual Immunohistochemistry Staining of Gland Markers in the Colon [1.741980945827445]
Hematoxylin and Eosin (H&E) staining is one of the most frequently used stains for disease analysis, diagnosis, and grading.
Pathologists do, however, need different immunohistochemical (IHC) stains to analyze specific structures or cells.
arXiv Detail & Related papers (2023-08-25T05:24:23Z) - Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z) - PrepNet: A Convolutional Auto-Encoder to Homogenize CT Scans for Cross-Dataset Medical Image Analysis [0.22485007639406518]
COVID-19 diagnosis can now be done efficiently using PCR tests, but this use case exemplifies the need for a methodology to overcome data variability issues.
We propose a novel generative approach that aims to erase the differences induced by, e.g., the imaging technology while introducing minimal changes to the CT scans.
arXiv Detail & Related papers (2022-08-19T15:49:47Z) - Lymphocyte Classification in Hyperspectral Images of Ovarian Cancer Tissue Biopsy Samples [94.37521840642141]
We present a machine learning pipeline to segment white blood cell pixels in hyperspectral images of biopsy cores.
These cells are clinically important for diagnosis, but some prior work has struggled to incorporate them due to difficulty obtaining precise pixel labels.
arXiv Detail & Related papers (2022-03-23T00:58:27Z) - Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)