Student Become Decathlon Master in Retinal Vessel Segmentation via
Dual-teacher Multi-target Domain Adaptation
- URL: http://arxiv.org/abs/2203.03631v2
- Date: Wed, 9 Mar 2022 10:51:01 GMT
- Title: Student Become Decathlon Master in Retinal Vessel Segmentation via
Dual-teacher Multi-target Domain Adaptation
- Authors: Linkai Peng, Li Lin, Pujin Cheng, Huaqing He, Xiaoying Tang
- Abstract summary: We propose RVms, a novel unsupervised multi-target domain adaptation approach to segment retinal vessels (RVs) from multimodal and multicenter retinal images.
RVms is found to be very close to the target-trained Oracle in terms of segmenting the RVs, largely outperforming other state-of-the-art methods.
- Score: 1.121358474059223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation has been proposed recently to tackle the
so-called domain shift between training data and test data with different
distributions. However, most existing methods focus only on single-target domain
adaptation and cannot be applied to scenarios with multiple target domains.
In this paper, we propose RVms, a novel unsupervised multi-target domain
adaptation approach to segment retinal vessels (RVs) from multimodal and
multicenter retinal images. RVms mainly consists of a style augmentation and
transfer (SAT) module and a dual-teacher knowledge distillation (DTKD) module.
SAT augments and clusters images into source-similar domains and
source-dissimilar domains via Bézier and Fourier transformations. DTKD
utilizes the augmented and transformed data to train two teachers, one for
source-similar domains and the other for source-dissimilar domains. Afterwards,
knowledge distillation is performed to iteratively distill different domain
knowledge from teachers to a generic student. The local relative intensity
transformation is employed to characterize RVs in a domain-invariant manner and
to promote the generalizability of the teacher and student models. Moreover, we
construct a new multimodal and multicenter vascular segmentation dataset from
existing publicly-available datasets, which can be used to benchmark various
domain adaptation and domain generalization methods. Through extensive
experiments, RVms is found to be very close to the target-trained Oracle in
terms of segmenting the RVs, largely outperforming other state-of-the-art
methods.
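The abstract does not detail the SAT module's Fourier transformation. A common way to realize Fourier-based style transfer is a low-frequency amplitude swap, in the spirit of Fourier Domain Adaptation; the sketch below follows that recipe. The function name and the `beta` window parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fourier_style_transfer(src_img, tgt_img, beta=0.05):
    """Swap the low-frequency amplitude of src_img with that of tgt_img.

    The low-frequency amplitude spectrum mostly encodes global appearance
    (illumination, contrast, modality style), while phase preserves content
    such as vessel structure, so swapping it transfers style, not anatomy.
    """
    # 2-D FFT of both grayscale images (H x W float arrays)
    src_fft = np.fft.fft2(src_img)
    tgt_fft = np.fft.fft2(tgt_img)

    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Centre the spectra so low frequencies sit in the middle
    src_amp = np.fft.fftshift(src_amp)
    tgt_amp = np.fft.fftshift(tgt_amp)

    # Replace a centred (2b x 2b) low-frequency window of the source amplitude
    h, w = src_img.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b, cw - b:cw + b] = tgt_amp[ch - b:ch + b, cw - b:cw + b]

    # Recombine the mutated amplitude with the original phase and invert
    src_amp = np.fft.ifftshift(src_amp)
    mixed = src_amp * np.exp(1j * src_phase)
    return np.real(np.fft.ifft2(mixed))
```

Sweeping `beta` from 0 upward interpolates from the original source image toward a target-styled version, which is one way such augmentations can populate source-similar and source-dissimilar domains.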
Related papers
- Multi-Head Distillation for Continual Unsupervised Domain Adaptation in
Semantic Segmentation [38.10483890861357]
This work focuses on a novel UDA framework, continual UDA, in which models operate on multiple target domains discovered sequentially.
We propose MuHDi (Multi-Head Distillation), a method that solves the catastrophic forgetting problem inherent in continual learning tasks.
arXiv Detail & Related papers (2022-04-25T14:03:09Z) - Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel
Segmentation via Disentangling Representation Style Transfer and
Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to the target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z) - Reiterative Domain Aware Multi-target Adaptation [14.352214079374463]
We propose Reiterative D-CGCT (RD-CGCT) that obtains better adaptation performance by reiterating multiple times over each target domain.
RD-CGCT significantly improves the performance over D-CGCT for Office-Home and Office31 datasets.
arXiv Detail & Related papers (2021-08-26T17:12:25Z) - Variational Attention: Propagating Domain-Specific Knowledge for
Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, because labelling is laborious, collecting a new large-scale dataset is perceived as intractable.
We resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet).
It is mainly achieved by proposing the novel Variational Attention (VA) technique for explicitly modeling the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z) - Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z) - Multi-Source Domain Adaptation with Collaborative Learning for Semantic
Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z) - Curriculum CycleGAN for Textual Sentiment Domain Adaptation with
Multiple Sources [68.31273535702256]
We propose a novel instance-level MDA framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN).
C-CycleGAN consists of three components: (1) pre-trained text encoder which encodes textual input from different domains into a continuous representation space, (2) intermediate domain generator with curriculum instance-level adaptation which bridges the gap across source and target domains, and (3) task classifier trained on the intermediate domain for final sentiment classification.
We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches.
arXiv Detail & Related papers (2020-11-17T14:50:55Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z) - MADAN: Multi-source Adversarial Domain Aggregation Network for Domain
Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z) - Multi-source Domain Adaptation for Visual Sentiment Classification [92.53780541232773]
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
arXiv Detail & Related papers (2020-01-12T08:37:42Z)
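Several methods above (DTKD, MuHDi, ML-MSDA) build on knowledge distillation. As a reference point, the standard temperature-scaled soft-label distillation loss (a Hinton-style KL divergence between teacher and student predictions) can be sketched as below; the dual-teacher routing is only indicated in a comment, and all names are illustrative, not taken from any of the papers.

```python
import numpy as np

def softmax(logits, T=1.0, axis=-1):
    # Temperature-scaled, numerically stable softmax
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation: KL(teacher || student) at temperature T.

    Multiplying by T**2 keeps gradient magnitudes comparable across
    temperatures. With two teachers, each sample's loss is computed
    against the teacher that owns that sample's domain, e.g.:
        loss = distillation_loss(s[similar], t_sim[similar]) \
             + distillation_loss(s[dissimilar], t_dis[dissimilar])
    """
    p = softmax(teacher_logits, T)                 # soft teacher targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    log_p = np.log(p + 1e-12)
    kl = (p * (log_p - log_q)).sum(axis=-1)        # per-sample KL divergence
    return (T ** 2) * kl.mean()
```

The loss is zero when student and teacher logits agree and strictly positive otherwise, so iteratively minimizing it pulls the generic student toward each teacher's domain-specific predictions.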
This list is automatically generated from the titles and abstracts of the papers on this site.