Learning Site-specific Styles for Multi-institutional Unsupervised
Cross-modality Domain Adaptation
- URL: http://arxiv.org/abs/2311.12437v2
- Date: Wed, 22 Nov 2023 11:38:46 GMT
- Title: Learning Site-specific Styles for Multi-institutional Unsupervised
Cross-modality Domain Adaptation
- Authors: Han Liu, Yubo Fan, Zhoubing Xu, Benoit M. Dawant, Ipek Oguz
- Abstract summary: We present our solution to tackle the multi-institutional unsupervised domain adaptation for the crossMoDA 2023 challenge.
Our solution achieved the 1st place during both the validation and testing phases of the challenge.
- Score: 7.282377515210211
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised cross-modality domain adaptation is a challenging task in
medical image analysis, and it becomes more challenging when source and target
domain data are collected from multiple institutions. In this paper, we present
our solution to tackle the multi-institutional unsupervised domain adaptation
for the crossMoDA 2023 challenge. First, we perform unpaired image translation
to translate the source domain images to the target domain, where we design a
dynamic network to generate synthetic target domain images with controllable,
site-specific styles. Afterwards, we train a segmentation model using the
synthetic images and further reduce the domain gap by self-training. Our
solution achieved the 1st place during both the validation and testing phases
of the challenge. The code repository is publicly available at
https://github.com/MedICL-VU/crossmoda2023.
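The abstract describes the dynamic, site-conditioned translation network only at a high level. As a rough, hypothetical PyTorch sketch (not the authors' released code; the layer and variable names below are assumptions), controllable site-specific styles could be injected by predicting per-channel normalization parameters from a one-hot site code:

import torch
import torch.nn as nn

class SiteConditionedNorm(nn.Module):
    # Instance norm whose per-channel scale and shift are predicted from a one-hot
    # site code, so a single generator can render features in the style of a chosen site.
    def __init__(self, channels: int, num_sites: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.style = nn.Linear(num_sites, 2 * channels)  # site code -> (gamma, beta)

    def forward(self, x: torch.Tensor, site_code: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.style(site_code).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # reshape to (N, C, 1, 1) for broadcasting
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(x) + beta

# Usage: steer intermediate generator features toward the style of site 2 (of 3 sites).
layer = SiteConditionedNorm(channels=64, num_sites=3)
feats = torch.randn(1, 64, 128, 128)
site = torch.nn.functional.one_hot(torch.tensor([2]), num_classes=3).float()
styled = layer(feats, site)  # shape: (1, 64, 128, 128)

Conditioning only the normalization statistics (rather than training one generator per institution) is one plausible way to keep a single translator whose output style can be switched at inference time.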
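The self-training stage is likewise described in a single sentence. A minimal sketch of one plausible realization, assuming confidence-thresholded pseudo labels on the unlabeled real target images (the threshold value and helper names are illustrative, not from the paper):

import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(model, images, threshold=0.9, ignore_index=255):
    # Pseudo-label unlabeled target images; low-confidence pixels are ignored in the loss.
    model.eval()
    probs = torch.softmax(model(images), dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_index
    return labels

def self_training_step(model, optimizer, synth_images, synth_labels, real_images,
                       ignore_index=255):
    # One update that mixes supervised loss on synthetic target-style images
    # with a pseudo-label loss on real, unlabeled target-domain images.
    pseudo = make_pseudo_labels(model, real_images, ignore_index=ignore_index)
    model.train()
    loss = (F.cross_entropy(model(synth_images), synth_labels)
            + F.cross_entropy(model(real_images), pseudo, ignore_index=ignore_index))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()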
Related papers
- Adaptive Feature Fusion Neural Network for Glaucoma Segmentation on Unseen Fundus Images [13.03504366061946]
We propose a method named Adaptive Feature Fusion Neural Network (AFNN) for glaucoma segmentation on unseen domains.
The domain adaptor helps the pretrained model quickly adapt from other image domains to the medical fundus image domain.
Our proposed method achieves competitive performance compared with existing fundus segmentation methods on four public glaucoma datasets.
arXiv Detail & Related papers (2024-04-02T16:30:12Z)
- Domain-Controlled Prompt Learning [49.45309818782329]
Existing prompt learning methods often lack domain-awareness or domain-transfer mechanisms.
We propose Domain-Controlled Prompt Learning for specific domains.
Our method achieves state-of-the-art performance on specific-domain image recognition datasets.
arXiv Detail & Related papers (2023-09-30T02:59:49Z) - Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z)
- DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains [26.95350186287616]
Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images.
We propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
arXiv Detail & Related papers (2022-11-26T12:46:40Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we focus on generating target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to the target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic Segmentation [97.74059510314554]
Unsupervised domain adaptation (UDA) for semantic segmentation aims to adapt a segmentation model trained on the labeled source domain to the unlabeled target domain.
Existing methods try to learn domain invariant features while suffering from large domain gaps.
We propose a novel Dual Soft-Paste (DSP) method in this paper.
arXiv Detail & Related papers (2021-07-20T16:22:40Z)
- Self-Supervised Learning of Domain Invariant Features for Depth Estimation [35.74969527929284]
We tackle the problem of unsupervised synthetic-to-realistic domain adaptation for single image depth estimation.
An essential building block of single image depth estimation is an encoder-decoder task network that takes RGB images as input and produces depth maps as output.
We propose a novel training strategy to force the task network to learn domain invariant representations in a self-supervised manner.
arXiv Detail & Related papers (2021-06-04T16:45:48Z)