Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation
- URL: http://arxiv.org/abs/2305.14589v1
- Date: Tue, 23 May 2023 23:57:44 GMT
- Title: Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation
- Authors: Xiaofeng Liu, Jerry L. Prince, Fangxu Xing, Jiachen Zhuo, Reese Timothy, Maureen Stone, Georges El Fakhri, Jonghye Woo
- Abstract summary: We develop a generative self-training framework for domain adaptive image translation with continuous value prediction and regression objectives.
We evaluate our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
- Score: 12.080054869408213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-training is an important class of unsupervised domain adaptation (UDA)
approaches used to mitigate the problem of domain shift when applying
knowledge learned from a labeled source domain to unlabeled and heterogeneous
target domains. While self-training-based UDA has shown considerable promise on
discriminative tasks, including classification and segmentation, through
reliable pseudo-label filtering based on the maximum softmax probability, there
is a paucity of prior work on self-training-based UDA for generative tasks,
including image modality translation. To fill this gap, in this work, we seek
to develop a generative self-training (GST) framework for domain adaptive image
translation with continuous value prediction and regression objectives.
Specifically, we quantify both aleatoric and epistemic uncertainties within our
GST using variational Bayes learning to measure the reliability of synthesized
data. We also introduce a self-attention scheme that de-emphasizes the
background region to prevent it from dominating the training process. The
adaptation is then carried out by an alternating optimization scheme with
target domain supervision that focuses attention on the regions with reliable
pseudo-labels. We evaluated our framework on two cross-scanner/center,
inter-subject translation tasks, including tagged-to-cine magnetic resonance
(MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
Extensive validations with unpaired target domain data showed that our GST
yielded superior synthesis performance in comparison to adversarial training
UDA methods.
Related papers
- Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation [13.260109561599904]
Unsupervised domain adaptation (UDA) has been a vital protocol for transferring knowledge learned from a labeled source domain to an unlabeled, heterogeneous target domain.
We propose "off-the-shelf (OS)" UDA (OSUDA), aimed at image segmentation, which adapts an OS segmentor trained in a source domain to a target domain without access to source domain data during adaptation.
arXiv Detail & Related papers (2022-09-16T13:13:50Z)
- Source-free Unsupervised Domain Adaptation for Blind Image Quality Assessment [20.28784839680503]
Existing learning-based methods for blind image quality assessment (BIQA) are heavily dependent on large amounts of annotated training data.
In this paper, we take the first step towards source-free unsupervised domain adaptation (SFUDA) in a simple yet efficient manner.
We present a group of well-designed self-supervised objectives to guide the adaptation of the BN affine parameters towards the target domain.
arXiv Detail & Related papers (2022-07-17T09:42:36Z)
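As a rough illustration of the entry above, the sketch below freezes a source-trained model and optimizes only the BatchNorm affine (scale/shift) parameters on unlabeled target data; the self-supervised objective and all names are placeholders for the paper's actual design.

```python
# Illustrative sketch: source-free adaptation that updates only the
# BatchNorm affine parameters (gamma/beta); `self_supervised_loss` is a
# placeholder for the paper's objectives.
import torch
import torch.nn as nn

def bn_affine_parameters(model: nn.Module):
    """Yield only the learnable scale (gamma) and shift (beta) of BN layers."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            if m.weight is not None:
                yield m.weight
            if m.bias is not None:
                yield m.bias

def adapt_bn(model, target_loader, self_supervised_loss, epochs=1, lr=1e-4):
    for p in model.parameters():
        p.requires_grad_(False)          # freeze the whole source model ...
    params = list(bn_affine_parameters(model))
    for p in params:
        p.requires_grad_(True)           # ... except the BN gamma/beta
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x in target_loader:          # unlabeled target images
            loss = self_supervised_loss(model, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
```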
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective in UDA by exploiting the self-supervisions of unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework that greatly reduces flawed model predictions using a soft pseudo-label strategy.
In the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
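As a toy rendering of the curriculum strategy in the entry above, the schedule below shifts the loss weighting from the labeled source domain toward pseudo-labeled target data as training progresses; the linear ramp is purely an assumption, since the paper controls this weighting adaptively.

```python
# Illustrative sketch: curriculum weighting between source and target losses.
# The linear ramp is an assumption; the paper adapts the weighting instead.
def curriculum_weight(epoch: int, total_epochs: int) -> float:
    """Ramp the target-domain weight from 0 to 1 over the first half."""
    return min(1.0, epoch / max(1, total_epochs // 2))

def combined_loss(source_loss, target_pseudo_loss, epoch, total_epochs):
    w = curriculum_weight(epoch, total_epochs)
    return (1.0 - w) * source_loss + w * target_pseudo_loss
```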
- Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine MRI Synthesis [10.636015177721635]
We propose a novel generative self-training framework with continuous value prediction and regression objective for cross-domain image synthesis.
Specifically, we propose to filter the pseudo-label with an uncertainty mask, and quantify the predictive confidence of generated images with practical variational Bayes learning.
arXiv Detail & Related papers (2021-06-23T16:19:00Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
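The memory-efficient temporal ensemble mentioned above can be pictured as a running average of per-sample predictions, as in this hypothetical sketch: a single EMA buffer replaces storing every epoch's outputs, and only confident ensemble predictions are kept as pseudo-labels.

```python
# Illustrative sketch: a memory-efficient temporal ensemble of predictions.
# One EMA buffer per sample replaces storing each epoch's outputs.
import torch

class TemporalEnsemble:
    def __init__(self, num_samples: int, num_classes: int, momentum: float = 0.9):
        self.ema = torch.zeros(num_samples, num_classes)
        self.momentum = momentum

    def update(self, indices: torch.Tensor, probs: torch.Tensor) -> None:
        """Blend this epoch's softmax outputs into the running average."""
        self.ema[indices] = (self.momentum * self.ema[indices]
                             + (1.0 - self.momentum) * probs)

    def pseudo_labels(self, indices: torch.Tensor, conf_thresh: float = 0.9):
        """Return ensemble labels and a mask keeping only confident ones."""
        conf, labels = self.ema[indices].max(dim=1)
        return labels, conf > conf_thresh
```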
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
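A multi-sample contrastive loss of the kind the ILA-DA entry above describes can be sketched as a multi-positive InfoNCE over mixed source and target embeddings; how similar and dissimilar pairs are mined is paper-specific, so the positive-pair mask is simply an input here.

```python
# Illustrative sketch: multi-positive contrastive loss over mixed
# source/target features; the positive-pair mask is assumed given.
import torch
import torch.nn.functional as F

def multi_sample_contrastive_loss(feats: torch.Tensor,
                                  pos_mask: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """feats: (N, D) embeddings from both domains; pos_mask: (N, N) bool,
    True where two samples are deemed similar (diagonal excluded)."""
    z = F.normalize(feats, dim=1)
    logits = z @ z.t() / temperature
    logits.fill_diagonal_(-1e9)   # never contrast a sample with itself
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = pos_mask.float()
    pos_count = pos.sum(dim=1).clamp(min=1.0)
    # Mean negative log-likelihood of each sample's positives.
    return -(log_prob * pos).sum(dim=1).div(pos_count).mean()
```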
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that encourages pseudo-labels to generalize across domains.
In the authors' analysis, CST recovers the target ground truth in settings where both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over the prior state of the art on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
- Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as the source domain and TED-LIUM 3 as well as SWITCHBOARD as target domains show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
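The density ratio in the entry above is commonly obtained from a domain discriminator: a classifier trained to separate source from target examples yields p_src(x)/p_tgt(x) through its odds. The sketch below shows that standard construction, not the paper's exact procedure.

```python
# Illustrative sketch: discriminator-based density ratio estimation.
# r(x) = p_src(x) / p_tgt(x) recovered from a source-vs-target classifier.
import torch
import torch.nn as nn

def fit_domain_discriminator(x_src, x_tgt, epochs=200, lr=1e-3):
    """Logistic classifier separating source (label 1) from target (label 0)."""
    d = nn.Sequential(nn.Linear(x_src.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    x = torch.cat([x_src, x_tgt])
    y = torch.cat([torch.ones(len(x_src), 1), torch.zeros(len(x_tgt), 1)])
    opt = torch.optim.Adam(d.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(d(x), y)
        loss.backward()
        opt.step()
    return d

@torch.no_grad()
def density_ratio(d, x, n_src, n_tgt):
    """Odds of the discriminator, corrected for the source/target mix."""
    p = torch.sigmoid(d(x))
    return (p / (1.0 - p).clamp(min=1e-6)) * (n_tgt / n_src)
```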
- Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID [62.90727103061876]
Unsupervised domain adaptation (UDA) aims at adapting the model trained on a labeled source-domain dataset to an unlabeled target-domain dataset.
We propose an end-to-end structured domain adaptation framework with an online relation-consistency regularization term.
Our proposed framework is shown to achieve state-of-the-art performance on multiple UDA tasks of person re-ID.
arXiv Detail & Related papers (2020-03-14T14:45:18Z)