CADA: Multi-scale Collaborative Adversarial Domain Adaptation for
Unsupervised Optic Disc and Cup Segmentation
- URL: http://arxiv.org/abs/2110.02417v1
- Date: Tue, 5 Oct 2021 23:44:26 GMT
- Title: CADA: Multi-scale Collaborative Adversarial Domain Adaptation for
Unsupervised Optic Disc and Cup Segmentation
- Authors: Peng Liu, Charlie T. Tran, Bin Kong, Ruogu Fang
- Abstract summary: We propose a novel unsupervised domain adaptation framework, called Collaborative Adversarial Domain Adaptation (CADA).
Our proposed CADA is an interactive paradigm that achieves collaborative adaptation through both adversarial learning and weight ensembling at different network layers.
We show that our CADA model incorporating multi-scale input training can overcome performance degradation and outperform state-of-the-art domain adaptation methods.
- Score: 3.587294308501889
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The diversity of retinal imaging devices poses a significant challenge:
domain shift, which leads to performance degradation when applying the deep
learning models trained on one domain to new testing domains. In this paper, we
propose a multi-scale input along with multiple domain adaptors applied
hierarchically in both feature and output spaces. The proposed training
strategy and novel unsupervised domain adaptation framework, called
Collaborative Adversarial Domain Adaptation (CADA), can effectively overcome
the challenge. Multi-scale inputs can reduce the information loss due to the
pooling layers used in the network for feature extraction, while our proposed
CADA is an interactive paradigm that achieves collaborative adaptation
through both adversarial learning and weight ensembling at
different network layers. In particular, to produce a better prediction for the
unlabeled target domain data, we simultaneously achieve domain invariance and
model generalizability via adversarial learning at multi-scale outputs from
different levels of network layers and maintaining an exponential moving
average (EMA) of the historical weights during training. Without annotating any
sample from the target domain, multiple adversarial losses in encoder and
decoder layers guide the extraction of domain-invariant features to confuse the
domain classifier. Meanwhile, ensembling weights via EMA reduces the
uncertainty introduced by learning against multiple discriminators. Comprehensive
experimental results demonstrate that our CADA model incorporating multi-scale
input training can overcome performance degradation and outperform
state-of-the-art domain adaptation methods in segmenting retinal optic disc and
cup from fundus images stemming from the REFUGE, Drishti-GS, and Rim-One-r3
datasets.
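
The abstract describes two mechanisms: adversarial alignment applied to outputs taken at several network levels, and an exponential moving average (EMA) over the segmenter's historical weights. The PyTorch sketch below only illustrates those two ideas; the names Discriminator, ema_update, and adversarial_losses are hypothetical and are not taken from the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    """Small fully convolutional domain classifier (hypothetical architecture)."""

    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),  # source/target logit map
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    """Exponential moving average over the segmenter's historical weights."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)


def adversarial_losses(discriminators, source_outputs, target_outputs):
    """Adversarial losses summed over predictions taken at several network levels.

    Each discriminator is trained to tell source outputs (label 1) from target
    outputs (label 0); the segmenter is trained so that its target-domain
    outputs fool every discriminator, encouraging domain-invariant features.
    """
    seg_adv, disc_loss = 0.0, 0.0
    for D, out_s, out_t in zip(discriminators, source_outputs, target_outputs):
        logit_s = D(out_s.detach())  # detach: do not update the segmenter here
        logit_t = D(out_t.detach())
        disc_loss = disc_loss + F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s))
        disc_loss = disc_loss + F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t))
        # Segmenter update: make target outputs indistinguishable from source outputs.
        logit_t_seg = D(out_t)
        seg_adv = seg_adv + F.binary_cross_entropy_with_logits(logit_t_seg, torch.ones_like(logit_t_seg))
    return seg_adv, disc_loss

In a full training loop, the EMA copy would be created with copy.deepcopy(model), the supervised segmentation loss on labelled source images would be combined with a weighted seg_adv term, the discriminators would be updated from disc_loss, and ema_update would be called after every optimizer step; the EMA weights would then be used for inference on the target domain.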
Related papers
- DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains [26.95350186287616]
Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images.
We propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
arXiv Detail & Related papers (2022-11-26T12:46:40Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
A novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
Our proposed network achieves the best performance in comparison with several state-of-the-art techniques.
arXiv Detail & Related papers (2021-10-28T08:27:44Z)
- Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by assigning weak knowledge from a bag of source models.
We propose to embed Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor.
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised Domain Adaptation for Retinal Vessel Segmentation with Adversarial Learning and Transfer Normalization [22.186070895966022]
We propose an entropy-based adversarial learning strategy to reduce the distribution discrepancy between source and target domains.
A new transfer normalization layer is proposed to further boost the transferability of the deep network.
Our approach yields significant performance gains compared to other state-of-the-art methods.
arXiv Detail & Related papers (2021-08-04T02:45:37Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Learning a Domain-Agnostic Visual Representation for Autonomous Driving via Contrastive Loss [25.798361683744684]
Domain-Agnostic Contrastive Learning (DACL) is a two-stage unsupervised domain adaptation framework with cyclic adversarial training and contrastive loss.
Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-10T07:06:03Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis [120.75020271706978]
Few-shot learning uses prior knowledge learned from the seen classes to recognize the unseen classes.
This class-difference-caused distribution shift can be considered as a special case of domain shift.
We propose a prototypical domain adaptation network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework.
arXiv Detail & Related papers (2020-02-06T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.