Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with
Reliable Transfer for Cardiac Segmentation
- URL: http://arxiv.org/abs/2101.02375v1
- Date: Thu, 7 Jan 2021 05:17:38 GMT
- Title: Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with
Reliable Transfer for Cardiac Segmentation
- Authors: Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
- Abstract summary: We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge beneath the unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target domain data.
- Score: 69.09432302497116
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Annotation scarcity is a long-standing problem in medical image
analysis. To efficiently leverage limited annotations, abundant unlabeled data are
additionally exploited in semi-supervised learning, while well-established
cross-modality data are investigated in domain adaptation. In this paper, we
aim to explore the feasibility of concurrently leveraging both unlabeled data
and cross-modality data for annotation-efficient cardiac segmentation. To this
end, we propose a cutting-edge semi-supervised domain adaptation framework,
namely Dual-Teacher++. Besides directly learning from limited labeled target
domain data (e.g., CT) via a student model, as adopted in previous literature, we
design novel dual teacher models: an inter-domain teacher model that explores
cross-modality priors from the source domain (e.g., MR) and an intra-domain
teacher model that investigates the knowledge beneath the unlabeled target domain. In
this way, the dual teacher models would transfer acquired inter- and
intra-domain knowledge to the student model for further integration and
exploitation. Moreover, to encourage reliable dual-domain knowledge transfer,
we enhance the inter-domain knowledge transfer on the samples with higher
similarity to target domain after appearance alignment, and also strengthen
intra-domain knowledge transfer of unlabeled target data with higher prediction
confidence. In this way, the student model can obtain reliable dual-domain
knowledge and yield improved performance on target domain data. We extensively
evaluated the feasibility of our method on the MM-WHS 2017 challenge dataset.
The experiments have demonstrated the superiority of our framework over other
semi-supervised learning and domain adaptation methods. Moreover, our
performance gains hold in both directions, i.e., adapting from MR to CT and
from CT to MR.
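The reliable-transfer idea in the abstract can be sketched in code. The following is an illustrative NumPy sketch, not the authors' implementation: the EMA decay, the confidence threshold, and the squared-error consistency loss are assumptions chosen for clarity. It shows (a) a mean-teacher-style EMA update for the intra-domain teacher, (b) a per-pixel reliability mask that keeps only high-confidence teacher predictions, and (c) a consistency loss masked by that reliability weight.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean-teacher-style update: the intra-domain teacher tracks an
    exponential moving average of the student's weights.
    alpha is an assumed decay rate, not taken from the paper."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def confidence_weight(probs, threshold=0.8):
    """Reliability mask over teacher softmax outputs (last axis = classes):
    keep a pseudo-label only where the maximum class probability exceeds
    an assumed threshold, mirroring the 'higher prediction confidence'
    criterion for intra-domain transfer."""
    return (probs.max(axis=-1) >= threshold).astype(float)

def weighted_consistency_loss(student_probs, teacher_probs, weight):
    """Squared-error consistency between student and teacher predictions,
    averaged only over positions the reliability mask keeps. The same
    masking idea applies to inter-domain transfer, with the weight derived
    from appearance similarity to the target domain instead."""
    sq = ((student_probs - teacher_probs) ** 2).sum(axis=-1)
    kept = weight.sum()
    return float((weight * sq).sum() / kept) if kept > 0 else 0.0
```

In a training loop, the student would minimize the supervised loss on labeled target data plus the two masked consistency terms, while `ema_update` refreshes the intra-domain teacher after each student step.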
Related papers
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
- Adaptive Hierarchical Dual Consistency for Semi-Supervised Left Atrium Segmentation on Cross-Domain Data [8.645556125521246]
Generalising semi-supervised learning to cross-domain data is important for improving model robustness.
The AHDC consists of a Bidirectional Adversarial Inference module (BAI) and a Hierarchical Dual Consistency learning module (HDC).
We demonstrate the performance of our proposed AHDC on four 3D late gadolinium enhancement cardiac MR (LGE-CMR) datasets from different centres and a 3D CT dataset.
arXiv Detail & Related papers (2021-09-17T02:15:10Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from unlabeled target data and labeled source data via two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training [22.366638308792734]
Unsupervised Domain Adaptation (UDA) aims at improving the generalization capability of a model trained on a source domain to perform well on a target domain for which no labeled data is available.
We propose an approach to adapt a deep neural network trained on synthetic data to real scenes addressing the domain shift between the two different data distributions.
arXiv Detail & Related papers (2020-04-27T11:48:03Z)
- Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely adaptable knowledge tracing (AKT), to address the domain adaptation for knowledge tracing (DAKT) problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) based on deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
arXiv Detail & Related papers (2020-01-14T15:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.