Deep Multimodal Guidance for Medical Image Classification
- URL: http://arxiv.org/abs/2203.05683v1
- Date: Thu, 10 Mar 2022 23:50:08 GMT
- Title: Deep Multimodal Guidance for Medical Image Classification
- Authors: Mayur Mallya and Ghassan Hamarneh
- Abstract summary: We focus on the application of deep learning for image-based diagnosis.
We develop a light-weight guidance model that leverages the latent representation learned from the superior modality.
We show a boost in diagnostic performance of the inferior modality without requiring the superior modality.
- Score: 14.597243018813034
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Medical imaging is a cornerstone of therapy and diagnosis in modern medicine.
However, the choice of imaging modality for a particular theranostic task
typically involves trade-offs between the feasibility of using a particular
modality (e.g., short wait times, low cost, fast acquisition, reduced
radiation/invasiveness) and the expected performance on a clinical task (e.g.,
diagnostic accuracy, efficacy of treatment planning and guidance). In this
work, we aim to apply the knowledge learned from the less feasible but
better-performing (superior) modality to guide the utilization of the
more-feasible yet under-performing (inferior) modality and steer it towards
improved performance. We focus on the application of deep learning for
image-based diagnosis. We develop a light-weight guidance model that leverages
the latent representation learned from the superior modality, when training a
model that consumes only the inferior modality. We examine the advantages of
our method in the context of two clinical applications: multi-task skin lesion
classification from clinical and dermoscopic images and brain tumor
classification from multi-sequence magnetic resonance imaging (MRI) and
histopathology images. For both these scenarios we show a boost in diagnostic
performance of the inferior modality without requiring the superior modality.
Furthermore, in the case of brain tumor classification, our method outperforms
the model trained on the superior modality while producing comparable results
to the model that uses both modalities during inference.
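The guidance idea described in the abstract can be illustrated as a lightweight mapping from the inferior modality's latent space toward the superior modality's latent space, trained against a teacher's representations and used alone at inference. The following NumPy sketch is illustrative only: the dimensions, the linear guidance map, and the MSE guidance loss are assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions.
D_INF, D_SUP, BATCH = 32, 64, 8

# Pretend these come from frozen modality-specific encoders:
z_inf = rng.normal(size=(BATCH, D_INF))  # inferior-modality latents (student side)
z_sup = rng.normal(size=(BATCH, D_SUP))  # superior-modality latents (teacher side)

# Lightweight guidance model: a single linear map from the
# inferior latent space into the superior latent space.
W_g = rng.normal(size=(D_INF, D_SUP)) * 0.01

def guidance_loss(z_inf, z_sup, W_g):
    """Mean squared error between mapped inferior latents and superior latents."""
    z_hat = z_inf @ W_g
    return float(np.mean((z_hat - z_sup) ** 2))

def guidance_step(z_inf, z_sup, W_g, lr=0.1):
    """One gradient-descent step; closed-form gradient of the MSE above."""
    z_hat = z_inf @ W_g
    grad = 2.0 * z_inf.T @ (z_hat - z_sup) / (z_inf.shape[0] * z_sup.shape[1])
    return W_g - lr * grad

loss_before = guidance_loss(z_inf, z_sup, W_g)
for _ in range(200):
    W_g = guidance_step(z_inf, z_sup, W_g)
loss_after = guidance_loss(z_inf, z_sup, W_g)

# At inference, only the inferior modality and the learned map are needed:
z_guided = z_inf @ W_g  # stand-in for the superior representation
```

In the paper's setting this mapped representation would feed a downstream classifier, so the superior modality is needed only during training.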
Related papers
- A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images [4.576524795036682]
Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation.
We propose a clinical-oriented multi-level CL framework that aims to enhance the model's capacity to extract lesion features.
The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray.
arXiv Detail & Related papers (2024-04-07T09:08:14Z)
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase, where features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step, leveraging related classes between the meta-training and meta-testing phases while varying the level of granularity.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Learned Image Resizing with Efficient Training (LRET) Facilitates Improved Performance of Large-scale Digital Histopathology Image Classification Models [0.0]
Histologic examination plays a crucial role in oncology research and diagnostics.
Current approaches to training deep convolutional neural networks (DCNN) result in suboptimal model performance.
We introduce a novel approach that addresses the main limitations of traditional histopathology classification model training.
arXiv Detail & Related papers (2024-01-19T23:45:47Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumor represents one of the most fatal cancers around the world, and is very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI [1.049712834719005]
We present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image.
Our framework consists of a convolutional neural network backbone and a causality-extractor module.
Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information.
arXiv Detail & Related papers (2023-09-19T16:08:33Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitation of computing resources on portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin diseases classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.