Knowledge distillation from multi-modal to mono-modal segmentation networks
- URL: http://arxiv.org/abs/2106.09564v1
- Date: Thu, 17 Jun 2021 14:46:57 GMT
- Title: Knowledge distillation from multi-modal to mono-modal segmentation networks
- Authors: Minhao Hu, Matthis Maillard, Ya Zhang, Tommaso Ciceri, Giammarco La Barbera, Isabelle Bloch, Pietro Gori
- Abstract summary: We propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student).
We show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
- Score: 13.213798509506272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been shown to improve segmentation accuracy over mono-modal segmentation in several applications. However, acquiring multiple modalities is usually not feasible in a clinical setting, owing to the limited number of physicians and scanners and the need to limit costs and scan time; most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, in which the student network is trained on a subset (1 modality) of the teacher's inputs (n modalities). We illustrate the effectiveness of the proposed framework on brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
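
As a concrete reading of the abstract, the sketch below shows one way such a distillation objective could look in PyTorch: a frozen multi-modal teacher produces soft per-pixel targets, and a mono-modal student is trained on a weighted sum of a supervised loss and a temperature-softened KL term. This is a minimal illustration, not the authors' implementation; the function names, the temperature `T`, the weight `alpha`, and the use of cross-entropy (KD-Net itself may combine other segmentation losses, e.g. Dice) are all assumptions.

```python
import torch
import torch.nn.functional as F

def kd_seg_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Supervised segmentation loss plus a distillation term.

    student_logits: (B, C, H, W) output of the mono-modal student.
    teacher_logits: (B, C, H, W) output of the frozen multi-modal teacher.
    target:         (B, H, W) ground-truth label map.
    """
    # Supervised term: per-pixel cross-entropy against the ground truth.
    ce = F.cross_entropy(student_logits, target)
    # Distillation term: KL divergence between temperature-softened
    # per-pixel class distributions of student and teacher.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl

def train_step(student, teacher, optimizer, x_multi, target, mod_idx=0):
    """One optimization step: the teacher sees all n modalities stacked
    along the channel axis, the student sees only one of them."""
    teacher.eval()
    with torch.no_grad():                 # teacher is pre-trained and frozen
        t_logits = teacher(x_multi)       # x_multi: (B, n_modalities, H, W)
    s_logits = student(x_multi[:, mod_idx:mod_idx + 1])  # single modality
    loss = kd_seg_loss(s_logits, t_logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Scaling the KL term by T^2 follows the usual distillation convention, keeping gradient magnitudes comparable as the temperature changes.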
Related papers
- MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation [1.146419670457951]
Current practices limit model training and supervised pre-training to one or a few similar datasets.
We propose MultiTalent, a method that leverages multiple CT datasets with diverse and conflicting class definitions.
arXiv Detail & Related papers (2023-03-25T11:37:16Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets covering four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- A Multi-modal Fusion Framework Based on Multi-task Correlation Learning for Cancer Prognosis Prediction [8.476394437053477]
We present a multi-modal fusion framework based on multi-task correlation learning (MultiCoFusion) for survival analysis and cancer grade classification.
We systematically evaluate our framework using glioma datasets from The Cancer Genome Atlas (TCGA).
arXiv Detail & Related papers (2022-01-22T15:16:24Z)
- Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation [7.021001169318551]
Multi-modal medical image segmentation plays an essential role in clinical diagnosis.
It remains challenging as the input modalities are often not well-aligned spatially.
We propose an affinity-guided fully convolutional network for multimodal image segmentation.
arXiv Detail & Related papers (2021-01-05T13:56:51Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in specific information systems that make the same information available under different modalities.
This offers a unique opportunity to obtain, and use at train time, multiple views of the same information that might not always be available at test time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test time (see the sketch after this list).
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BraTS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
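
As a rough illustration of the modality-dropping idea referenced in the CMIM entry above: whole modality channels are randomly zeroed during training, so the learned representation cannot depend on any single modality being present at test time. The function name, drop probability, and (B, M, H, W) tensor layout are hypothetical, not taken from the CMIM paper.

```python
import torch

def drop_modalities(x: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """Randomly zero whole modality channels during training.

    x: (B, M, H, W) batch where dim 1 indexes imaging modalities.
    Each modality is dropped independently with probability p,
    but at least one modality is always kept per sample.
    """
    B, M = x.shape[:2]
    keep = (torch.rand(B, M, device=x.device) > p).float()
    # Re-enable one random modality for samples where all were dropped.
    empty = keep.sum(dim=1) == 0
    if empty.any():
        idx = torch.randint(M, (int(empty.sum().item()),), device=x.device)
        keep[empty, idx] = 1.0
    return x * keep[:, :, None, None]
```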