Prototype Knowledge Distillation for Medical Segmentation with Missing Modality
- URL: http://arxiv.org/abs/2303.09830v2
- Date: Mon, 17 Apr 2023 07:53:32 GMT
- Title: Prototype Knowledge Distillation for Medical Segmentation with Missing Modality
- Authors: Shuai Wang, Zipei Yan, Daoan Zhang, Haining Wei, Zhongsen Li, Rui Li
- Abstract summary: We propose a prototype knowledge distillation (ProtoKD) method to tackle the challenging missing-modality problem.
Our ProtoKD not only distills the pixel-wise knowledge of multi-modality data to single-modality data but also transfers intra-class and inter-class feature variations.
Our method achieves state-of-the-art performance on the BraTS benchmark.
- Score: 5.0043036421429035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modality medical imaging is crucial in clinical treatment as it can
provide complementary information for medical image segmentation. However,
collecting multi-modal data in clinical practice is difficult due to limits on
scan time and other clinical constraints. As such, it is clinically meaningful
to develop an image segmentation paradigm that handles this missing-modality
problem. In this paper, we propose a prototype knowledge distillation (ProtoKD)
method to tackle this challenging problem, especially in the toughest scenario
where only single-modality data can be accessed. Specifically, our ProtoKD not
only distills the pixel-wise knowledge of multi-modality data to
single-modality data but also transfers intra-class and inter-class feature
variations, such that the student model learns a more robust feature
representation from the teacher model and can run inference with only a single
modality. Our method achieves state-of-the-art performance on the BraTS
benchmark. The code is available at
\url{https://github.com/SakurajimaMaiii/ProtoKD}.
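The abstract describes two distillation signals: pixel-wise knowledge transfer from the multi-modality teacher's predictions, and transfer of intra-class and inter-class feature relations via class prototypes. The sketch below, in PyTorch, illustrates one plausible form of these two losses under explicit assumptions (masked average pooling to build prototypes, cosine pixel-to-prototype similarities, KL alignment of softened distributions); it is not the authors' released implementation, and all function names are hypothetical.

```python
# A minimal sketch of prototype-style knowledge distillation for
# missing-modality segmentation, assuming PyTorch and a teacher trained on
# multi-modal inputs whose logits/features are distilled into a
# single-modality student. Function names (pixel_kd_loss, class_prototypes,
# proto_similarity, proto_kd_loss) are illustrative, not from the released code.
import torch
import torch.nn.functional as F


def pixel_kd_loss(student_logits, teacher_logits, T=4.0):
    """Pixel-wise distillation: KL between softened class distributions.
    Both inputs have shape (B, C, H, W)."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def class_prototypes(features, labels, num_classes):
    """Masked average pooling: one prototype (mean feature vector) per class.
    features: (B, D, H, W); labels: (B, H, W) integer class map."""
    D = features.shape[1]
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)   # (N, D)
    labs = labels.reshape(-1)                              # (N,)
    protos = []
    for c in range(num_classes):
        mask = labs == c
        protos.append(feats[mask].mean(dim=0) if mask.any() else feats.new_zeros(D))
    return torch.stack(protos)                             # (num_classes, D)


def proto_similarity(features, prototypes):
    """Cosine similarity of every pixel feature to every class prototype,
    which encodes intra-class and inter-class feature relations."""
    f = F.normalize(features, dim=1)        # (B, D, H, W)
    p = F.normalize(prototypes, dim=1)      # (C, D)
    return torch.einsum("bdhw,cd->bchw", f, p)


def proto_kd_loss(student_feats, teacher_feats, labels, num_classes, T=1.0):
    """Align the student's pixel-to-prototype similarity map with the
    teacher's, transferring class-wise feature structure across modalities."""
    with torch.no_grad():
        t_protos = class_prototypes(teacher_feats, labels, num_classes)
        t_sim = proto_similarity(teacher_feats, t_protos)
    s_protos = class_prototypes(student_feats, labels, num_classes)
    s_sim = proto_similarity(student_feats, s_protos)
    p_t = F.softmax(t_sim / T, dim=1)
    log_p_s = F.log_softmax(s_sim / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```

During training, these two terms would typically be added to the student's ordinary segmentation loss on its single available modality, e.g. loss = ce_loss + alpha * pixel_kd + beta * proto_kd, with the weights alpha and beta chosen empirically.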
Related papers
- Multimodal Masked Autoencoder Pre-training for 3D MRI-Based Brain Tumor Analysis with Missing Modalities [0.0]
BM-MAE is a masked image modeling pre-training strategy tailored for multimodal MRI data.
It seamlessly adapts to any combination of available modalities, extracting rich representations that capture both intra- and inter-modal information.
It can quickly and efficiently reconstruct missing modalities, highlighting its practical value.
arXiv Detail & Related papers (2025-05-01T14:51:30Z)
- Partially Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation [53.723234136550055]
We term the new learning paradigm Partially Supervised Unpaired Multi-Modal Learning (PSUMML).
We propose a novel Decomposed partial class adaptation with snapshot Ensembled Self-Training (DEST) framework for this setting.
Our framework consists of a compact segmentation network with modality-specific normalization layers for learning with partially labeled unpaired multi-modal data.
arXiv Detail & Related papers (2025-03-07T07:22:42Z)
- VISION-MAE: A Foundation Model for Medical Image Segmentation and Classification [36.8105960525233]
We present a novel foundation model, VISION-MAE, specifically designed for medical imaging.
VISION-MAE is trained on a dataset of 2.5 million unlabeled images from various modalities.
It is then adapted to classification and segmentation tasks using explicit labels.
arXiv Detail & Related papers (2024-02-01T21:45:12Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Medical Diagnosis with Large Scale Multimodal Transformers: Leveraging Diverse Data for More Accurate Diagnosis [0.15776842283814416]
We present a new technical approach of "learnable synergies".
Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine.
It outperforms state-of-the-art models in clinically relevant diagnosis tasks.
arXiv Detail & Related papers (2022-12-18T20:43:37Z)
- Analysing the effectiveness of a generative model for semi-supervised medical image segmentation [23.898954721893855]
State-of-the-art in automated segmentation remains supervised learning, employing discriminative models such as U-Net.
Semi-supervised learning (SSL) attempts to leverage the abundance of unlabelled data to obtain more robust and reliable models.
Deep generative models such as the SemanticGAN are truly viable alternatives to tackle challenging medical image segmentation problems.
arXiv Detail & Related papers (2022-11-03T15:19:59Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository in which each component can be used in a plug-and-play manner.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging [2.1204495827342438]
This manuscript aims to implement a novel model that can learn robust representations from cross-domain data by encapsulating distinct and shared patterns from different modalities.
Tests on CT and MRI liver data acquired in routine clinical trials show that the proposed model outperforms all other baselines by a large margin.
arXiv Detail & Related papers (2020-06-08T07:35:55Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
The site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.