Knowledge Distillation from Cross Teaching Teachers for Efficient
Semi-Supervised Abdominal Organ Segmentation in CT
- URL: http://arxiv.org/abs/2211.05942v1
- Date: Fri, 11 Nov 2022 01:20:55 GMT
- Title: Knowledge Distillation from Cross Teaching Teachers for Efficient
Semi-Supervised Abdominal Organ Segmentation in CT
- Authors: Jae Won Choi
- Abstract summary: This study proposes a coarse-to-fine framework with two teacher models and a student model that combines knowledge distillation and cross teaching, a consistency regularization based on pseudo-labels, for efficient semi-supervised learning.
The proposed method is demonstrated on the abdominal multi-organ segmentation task in CT images under the MICCAI FLARE 2022 challenge, with mean Dice scores of 0.8429 and 0.8520 in the validation and test sets, respectively.
- Score: 0.3959606869996231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For more clinical applications of deep learning models for medical image
segmentation, high demands on labeled data and computational resources must be
addressed. This study proposes a coarse-to-fine framework with two teacher
models and a student model that combines knowledge distillation and cross
teaching, a consistency regularization based on pseudo-labels, for efficient
semi-supervised learning. The proposed method is demonstrated on the abdominal
multi-organ segmentation task in CT images under the MICCAI FLARE 2022
challenge, with mean Dice scores of 0.8429 and 0.8520 in the validation and
test sets, respectively.
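The abstract only names the ingredients, so a brief sketch may help clarify how cross teaching between two teachers and distillation to a compact student typically fit together. The PyTorch-style code below is a minimal illustration under assumed names (the teacher/student models, optimizers, loss weights, and data tensors are all hypothetical); it is not the authors' implementation, and the coarse-to-fine staging is omitted.

```python
# Minimal sketch (not the authors' code): cross teaching between two teacher
# networks on unlabeled CT data, followed by knowledge distillation to a
# compact student. Model classes, optimizers, and tensors are hypothetical.
import torch
import torch.nn.functional as F

def cross_teaching_step(teacher_a, teacher_b, labeled, labels, unlabeled,
                        opt_a, opt_b, w_cps=0.1):
    """One semi-supervised step: supervised CE on labeled data plus
    cross pseudo-label supervision on unlabeled data."""
    logits_a_l, logits_b_l = teacher_a(labeled), teacher_b(labeled)
    sup = F.cross_entropy(logits_a_l, labels) + F.cross_entropy(logits_b_l, labels)

    logits_a_u, logits_b_u = teacher_a(unlabeled), teacher_b(unlabeled)
    pseudo_a = logits_a_u.argmax(dim=1).detach()   # teacher A's hard pseudo-labels
    pseudo_b = logits_b_u.argmax(dim=1).detach()   # teacher B's hard pseudo-labels
    # Each teacher is supervised by the other's pseudo-labels (cross teaching).
    cps = F.cross_entropy(logits_a_u, pseudo_b) + F.cross_entropy(logits_b_u, pseudo_a)

    loss = sup + w_cps * cps
    opt_a.zero_grad(); opt_b.zero_grad()
    loss.backward()
    opt_a.step(); opt_b.step()
    return loss.item()

def distill_step(student, teacher_a, teacher_b, images, opt_s, T=2.0):
    """Distill the (frozen) teachers' averaged soft predictions into the student."""
    with torch.no_grad():
        soft = (F.softmax(teacher_a(images) / T, dim=1)
                + F.softmax(teacher_b(images) / T, dim=1)) / 2
    log_student = F.log_softmax(student(images) / T, dim=1)
    loss = F.kl_div(log_student, soft, reduction="batchmean") * (T * T)
    opt_s.zero_grad(); loss.backward(); opt_s.step()
    return loss.item()
```

In the paper's framework the teachers are trained with cross teaching and the student learns from them via distillation; the loss weight w_cps and temperature T above are illustrative defaults, not values reported in the abstract.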
Related papers
- REACT-KD: Region-Aware Cross-modal Topological Knowledge Distillation for Interpretable Medical Image Classification [2.195461571771795]
We introduce REACT-KD, a framework that transfers rich supervision from high-fidelity multi-modal sources into a lightweight CT-based student model.
The framework uses a dual teacher design: one branch captures structure-function relationships using dual-tracer PET/CT, and the other models dose-aware features through synthetically degraded low-dose CT data.
It achieves an average AUC of 93.4% on an internal PET/CT cohort and maintains 76.6% to 81.5% AUC across varying dose levels in external CT testing.
arXiv Detail & Related papers (2025-08-04T06:29:34Z) - A Self-training Framework for Semi-supervised Pulmonary Vessel Segmentation and Its Application in COPD [9.487894747353659]
The aim of this study was to segment the pulmonary vasculature using a semi-supervised method.
The proposed method, Semi2, significantly improves the precision of vessel segmentation by 2.3%, achieving a precision of 90.3%.
arXiv Detail & Related papers (2025-07-25T08:50:31Z) - Foundation Model for Whole-Heart Segmentation: Leveraging Student-Teacher Learning in Multi-Modal Medical Imaging [0.510750648708198]
Whole-heart segmentation from CT and MRI scans is crucial for cardiovascular disease analysis.
Existing methods struggle with modality-specific biases and the need for extensive labeled datasets.
We propose a foundation model for whole-heart segmentation using a self-supervised learning framework based on a student-teacher architecture.
arXiv Detail & Related papers (2025-03-24T14:47:54Z) - A Continual Learning-driven Model for Accurate and Generalizable Segmentation of Clinically Comprehensive and Fine-grained Whole-body Anatomies in CT [67.34586036959793]
There is no fully annotated CT dataset with all anatomies delineated for training.
We propose a novel continual learning-driven CT model that can segment complete anatomies.
Our single unified CT segmentation model, CL-Net, can segment a clinically comprehensive set of 235 fine-grained whole-body anatomies with high accuracy.
arXiv Detail & Related papers (2025-03-16T23:55:02Z) - MEDFORM: A Foundation Model for Contrastive Learning of CT Imaging and Clinical Numeric Data in Multi-Cancer Analysis [0.562479170374811]
We propose MEDFORM, a multimodal pre-training strategy that guides CT image representation learning.
MEDFORM efficiently processes CT slices through multiple instance learning (MIL) and adopts a dual pre-training strategy.
Our model was pre-trained on three different cancer types: lung cancer (141,171 slices), breast cancer (8,100 slices), and colorectal cancer (10,393 slices).
arXiv Detail & Related papers (2025-01-22T23:56:37Z) - PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z) - Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation (CMEMS).
arXiv Detail & Related papers (2024-04-18T00:18:07Z) - Class Activation Map-based Weakly supervised Hemorrhage Segmentation
using Resnet-LSTM in Non-Contrast Computed Tomography images [0.06269281581001895]
Intracranial hemorrhages (ICH) are routinely diagnosed using non-contrast CT (NCCT) for severity assessment.
Deep learning (DL)-based methods have shown great potential, however, training them requires a huge amount of manually annotated lesion-level labels.
We propose a novel weakly supervised DL method for ICH segmentation on NCCT scans, using image-level binary classification labels.
arXiv Detail & Related papers (2023-09-28T17:32:19Z) - Hybrid Representation-Enhanced Sampling for Bayesian Active Learning in
Musculoskeletal Segmentation of Lower Extremities [0.9287179270753105]
This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria.
Experiments are performed on two lower extremity (LE) datasets of MRI and CT images.
arXiv Detail & Related papers (2023-07-26T06:52:29Z) - Self-supervised Model Based on Masked Autoencoders Advance CT Scans
Classification [0.0]
This paper is inspired by the self-supervised learning algorithm MAE.
It uses an MAE model pre-trained on ImageNet to perform transfer learning on a CT scan dataset.
This method improves the generalization performance of the model and avoids the risk of overfitting on small datasets.
arXiv Detail & Related papers (2022-10-11T00:52:05Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations (a generic mean-teacher-style sketch of this idea appears after this list).
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
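Several of the semi-supervised entries above (e.g., the Progressive Mean Teacher and the relation-driven self-ensembling model) rely on teacher-student consistency under perturbations. The sketch below shows the generic mean-teacher pattern (an EMA teacher plus a consistency loss) purely for illustration; it is not the exact method of any listed paper, and the perturb function, models, and optimizer are assumed placeholders.

```python
# Generic mean-teacher-style consistency regularization (illustrative only;
# not the exact method of PMT or the relation-driven self-ensembling model).
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_step(student, teacher, unlabeled, perturb, opt, w_cons=1.0):
    """Encourage the student to match the teacher's prediction under perturbation."""
    with torch.no_grad():
        target = F.softmax(teacher(unlabeled), dim=1)     # teacher soft targets
    pred = F.softmax(student(perturb(unlabeled)), dim=1)  # student on perturbed input
    loss = w_cons * F.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student)
    return loss.item()

# Typical setup: teacher = copy.deepcopy(student); teacher.requires_grad_(False);
# call consistency_step alongside the usual supervised loss on labeled batches.
```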
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.