Multimodal Pathology Image Search Between H&E Slides and Multiplexed
Immunofluorescent Images
- URL: http://arxiv.org/abs/2306.06780v1
- Date: Sun, 11 Jun 2023 21:30:20 GMT
- Title: Multimodal Pathology Image Search Between H&E Slides and Multiplexed
Immunofluorescent Images
- Authors: Amir Hajighasemi, MD Jillur Rahman Saurav, Mohammad S Nasr, Jai
Prakash Veerla, Aarti Darji, Parisa Boodaghi Malidarreh, Michael Robben,
Helen H Shang, Jacob M Luber
- Abstract summary: We present an approach for multimodal pathology image search using dynamic time warping (DTW) on Variational Autoencoder (VAE) latent space.
Through training the VAE and applying DTW, we align and compare mIF and H&E slides.
Our method improves differential diagnosis and therapeutic decisions by integrating morphological H&E data with immunophenotyping from mIF.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present an approach for multimodal pathology image search, using dynamic
time warping (DTW) on Variational Autoencoder (VAE) latent space that is fed
into a ranked choice voting scheme to retrieve multiplexed immunofluorescent
imaging (mIF) that is most similar to a query H&E slide. Through training the
VAE and applying DTW, we align and compare mIF and H&E slides. Our method
improves differential diagnosis and therapeutic decisions by integrating
morphological H&E data with immunophenotyping from mIF, providing clinicians a
rich perspective of disease states. This facilitates an understanding of the
spatial relationships in tissue samples and could revolutionize the diagnostic
process, enhancing precision and enabling personalized therapy selection. Our
technique demonstrates feasibility using colorectal cancer and healthy tonsil
samples. An exhaustive ablation study was conducted on a search engine designed
to explore the correlation between multiplexed Immunofluorescence (mIF) and
Hematoxylin and Eosin (H&E) staining, in order to validate its ability to map
these distinct modalities into a unified vector space. Despite extreme class
imbalance, the system demonstrated robustness and utility by returning similar
results across various data features, which suggests potential for future use
in multimodal histopathology data analysis.
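The retrieval pipeline the abstract describes (per-patch VAE embeddings, DTW alignment between latent sequences, and a ranked-choice vote over candidate mIF slides) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the VAE encoder is replaced by toy latent sequences, and the function names, Borda-style vote aggregation, and data shapes are all assumptions made for the example.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two sequences of
    latent vectors (each row is one patch embedding along a scan path)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def ranked_choice_retrieval(query_seqs, candidates):
    """Rank candidate mIF latent sequences against a set of query H&E
    latent sequences: each query ranks all candidates by DTW distance,
    and ranks are aggregated with a Borda-style count (one plausible
    ranked-choice scheme; the paper's exact voting rule may differ)."""
    num_c = len(candidates)
    points = np.zeros(num_c)
    for q in query_seqs:
        dists = [dtw_distance(q, c) for c in candidates]
        order = np.argsort(dists)            # closest candidate first
        for rank, idx in enumerate(order):
            points[idx] += num_c - rank      # Borda points
    return list(np.argsort(-points))         # best-scoring candidate first

# Toy latent sequences standing in for VAE patch embeddings.
rng = np.random.default_rng(0)
query = [rng.normal(size=(8, 4)) for _ in range(3)]
cands = [q + rng.normal(scale=0.05, size=q.shape) for q in query[:2]]
cands.append(rng.normal(size=(8, 4)))        # one unrelated candidate
ranking = ranked_choice_retrieval(query, cands)
print(ranking)
```

In a real setting the rows of each sequence would come from a trained VAE encoder applied to registered tissue patches, and DTW would absorb local misalignments between the H&E and mIF scan paths that a plain Euclidean comparison could not.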
Related papers
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- VALD-MD: Visual Attribution via Latent Diffusion for Medical Diagnostics [0.0]
Visual attribution in medical imaging seeks to make evident the diagnostically-relevant components of a medical image.
We here present a novel generative visual attribution technique, one that leverages latent diffusion models in combination with domain-specific large language models.
The resulting system also exhibits a range of latent capabilities including zero-shot localized disease induction.
arXiv Detail & Related papers (2024-01-02T19:51:49Z)
- Diagnosis Of Takotsubo Syndrome By Robust Feature Selection From The Complex Latent Space Of DL-based Segmentation Network [4.583480375083946]
Using classification or segmentation models on medical images to learn latent features forgoes robust feature selection and may lead to overfitting.
We propose a novel feature selection technique using the latent space of a segmentation model that can aid diagnosis.
Our approach shows promising results in differential diagnosis of a rare cardiac disease with 82% diagnosis accuracy beating the previous state-of-the-art (SOTA) approach.
arXiv Detail & Related papers (2023-12-19T22:53:32Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Fusion of medical imaging and electronic health records with attention and multi-head mechanisms [4.433829714749366]
We propose a multi-modal attention module that uses EHR data to guide the selection of important regions during image feature extraction.
We also propose incorporating a multi-head mechanism into the gated multimodal unit (GMU) so that it can fuse image and EHR features in parallel across different subspaces.
Experiments on predicting Glasgow outcome scale (GOS) of intracerebral hemorrhage patients and classifying Alzheimer's Disease showed the proposed method can automatically focus on task-related areas.
arXiv Detail & Related papers (2021-12-22T07:39:26Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
- Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning [29.46896757506273]
Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for various retinal diseases.
Some studies show that AMD and DR share some common features like hemorrhagic points and exudation but most classification algorithms only train those disease models independently.
We propose a method called synergic adversarial label learning (SALL) which leverages relevant retinal disease labels in both semantic and feature space as additional signals and train the model in a collaborative manner.
arXiv Detail & Related papers (2020-03-24T01:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.