Robust and Efficient Segmentation of Cross-domain Medical Images
- URL: http://arxiv.org/abs/2207.12995v1
- Date: Tue, 26 Jul 2022 15:55:36 GMT
- Title: Robust and Efficient Segmentation of Cross-domain Medical Images
- Authors: Xingqun Qi, Zhuojie Wu, Min Ren, Muyi Sun, Zhenan Sun
- Abstract summary: We propose a generalizable knowledge distillation method for robust and efficient segmentation of medical images.
We propose two generalizable knowledge distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD).
In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features by the header exchange of MSAN.
- Score: 37.38861543166964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient medical image segmentation aims to provide accurate pixel-wise
predictions for medical images within a lightweight implementation framework.
However, lightweight frameworks generally fail to achieve high performance and
suffer from poor generalization ability on cross-domain tasks. In this paper,
we propose a generalizable knowledge distillation method for robust and
efficient segmentation of cross-domain medical images.
Primarily, we propose Model-Specific Alignment Networks (MSAN) to provide
domain-invariant representations, which are regularized by a Pre-trained
Semantic AutoEncoder (P-SAE). Meanwhile, a customized Alignment Consistency
Training (ACT) strategy is designed to promote the MSAN training. With the
domain-invariant representation vectors in MSAN, we propose two generalizable
knowledge distillation schemes: Dual Contrastive Graph Distillation (DCGD) and
Domain-Invariant Cross Distillation (DICD). Specifically, in DCGD, two types of
implicit contrastive graphs are designed to represent the intra-coupling and
inter-coupling semantic correlations from the perspective of the data distribution.
In DICD, the domain-invariant semantic vectors from the two models (i.e.,
teacher and student) are leveraged to cross-reconstruct features via the header
exchange of MSAN, which improves the generalization of both the encoder and
decoder in the student model. Furthermore, a metric named Fréchet Semantic
Distance (FSD) is tailored to verify the effectiveness of the regularized
domain-invariant features. Extensive experiments conducted on the Liver and
Retinal Vessel Segmentation datasets demonstrate the superiority of our method,
in terms of performance and generalization, on lightweight frameworks.
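The abstract does not spell out how FSD is computed. Assuming it follows the standard Fréchet distance between Gaussian fits of feature statistics (the same form as the FID metric), a minimal sketch is given below; the function name `frechet_semantic_distance` and the inputs `feats_a`/`feats_b` (matrices of regularized semantic vectors drawn from two domains) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import linalg


def frechet_semantic_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between two sets of semantic vectors, each modelled
    as a multivariate Gaussian (same closed form as FID).

    feats_a, feats_b: arrays of shape (num_samples, feature_dim).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    # Matrix square root of the covariance product; drop tiny imaginary
    # parts that can arise from numerical error.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Under this reading, a lower score between source- and target-domain semantic vectors would indicate that the P-SAE-regularized MSAN representations are more domain-invariant.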
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised Medical Image Segmentation [3.6748639131154315]
We extend the concept of metric learning to the segmentation task.
We propose a simple convolutional projection head for obtaining dense pixel-level features.
A bidirectional regularization mechanism involving two-stream regularization training is devised for the downstream task.
arXiv Detail & Related papers (2022-10-26T23:11:02Z)
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to the target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Unsupervised Domain Adaptation with Variational Approximation for Cardiac Segmentation [15.2292571922932]
Unsupervised domain adaptation is useful in medical image segmentation.
We propose a new framework, where the latent features of both domains are driven towards a common and parameterized variational form.
This is achieved by two networks based on variational auto-encoders (VAEs) and a regularization for this variational approximation.
arXiv Detail & Related papers (2021-06-16T13:00:39Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.