Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine
MRI Synthesis
- URL: http://arxiv.org/abs/2106.12499v1
- Date: Wed, 23 Jun 2021 16:19:00 GMT
- Title: Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine
MRI Synthesis
- Authors: Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Timothy Reese,
Jerry L. Prince, Georges El Fakhri, Jonghye Woo
- Abstract summary: We propose a novel generative self-training framework with continuous value prediction and regression objective for cross-domain image synthesis.
Specifically, we propose to filter the pseudo-label with an uncertainty mask, and quantify the predictive confidence of generated images with practical variational Bayes learning.
- Score: 10.636015177721635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-training based unsupervised domain adaptation (UDA) has shown great
potential to address the problem of domain shift, when applying a trained deep
learning model in a source domain to unlabeled target domains. However, while
self-training UDA has demonstrated its effectiveness on discriminative tasks,
such as classification and segmentation, via reliable pseudo-label selection
based on the discrete softmax histogram, self-training UDA for generative
tasks, such as image synthesis, has not been fully investigated. In this
work, we propose a novel generative self-training (GST) UDA framework with
continuous value prediction and regression objective for cross-domain image
synthesis. Specifically, we propose to filter the pseudo-label with an
uncertainty mask, and quantify the predictive confidence of generated images
with practical variational Bayes learning. The fast test-time adaptation is
achieved by a round-based alternating optimization scheme. We validated our
framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis
problem, where datasets in the source and target domains were acquired from
different scanners or centers. Extensive validations were carried out to verify
our framework against popular adversarial training UDA methods. Results show
that our GST, with tagged MRI of test subjects in new target domains, improved
the synthesis quality by a large margin, compared with the adversarial training
UDA methods.
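The mechanism sketched in the abstract, estimating predictive confidence with a practical variational-Bayes approximation, masking out uncertain pixels of the continuous-valued pseudo-label, and alternating pseudo-label generation with fine-tuning over rounds, can be illustrated with a short code sketch. The following is a minimal, hypothetical PyTorch rendering only: it assumes a source-pretrained generator containing dropout layers, uses Monte-Carlo dropout as a stand-in for the variational-Bayes confidence estimate, and its function names, keep ratio, and round counts are illustrative rather than the paper's exact formulation.

```python
# Hypothetical sketch of generative self-training (GST) with an MC-dropout
# uncertainty mask; variable names and the percentile threshold are
# illustrative, not the paper's exact formulation.
import torch

@torch.no_grad()
def pseudo_label_with_uncertainty(generator, x_tagged, n_samples=10):
    """Monte-Carlo dropout as a practical variational-Bayes approximation:
    run several stochastic forward passes, take the mean as the pseudo cine
    image and the per-pixel variance as predictive uncertainty."""
    generator.train()  # keep dropout active at test time
    samples = torch.stack([generator(x_tagged) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

def uncertainty_mask(uncertainty, keep_ratio=0.8):
    """Keep the most confident pixels; drop the rest from the regression loss."""
    thresh = torch.quantile(uncertainty.flatten(), keep_ratio)
    return (uncertainty <= thresh).float()

def adapt(generator, target_loader, num_rounds=3, steps_per_round=10, lr=1e-4):
    """Round-based alternating optimization: re-estimate pseudo-labels and
    masks with the current model, fine-tune on the masked L1 regression
    objective, and repeat."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(num_rounds):
        # (a) pseudo-label generation with the current weights
        cached = [(x, *pseudo_label_with_uncertainty(generator, x))
                  for x in target_loader]
        # (b) fine-tune the generator on confident pixels only
        generator.train()
        for _ in range(steps_per_round):
            for x, pseudo, unc in cached:
                mask = uncertainty_mask(unc)
                loss = (mask * (generator(x) - pseudo).abs()).sum() / mask.sum().clamp(min=1)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return generator
```

Only a masked L1 regression term is shown here; the paper's full objective and its exact uncertainty quantification differ.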
Related papers
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [49.827306773992376]
Continual Test-Time Adaptation (CTTA) is proposed to migrate a source pre-trained model to continually changing target distributions.
Our proposed method attains state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-12-19T15:34:52Z)
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised domain adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain with only unlabelled data.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z)
- Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation [12.080054869408213]
We develop a generative self-training framework for domain adaptive image translation with continuous value prediction and regression objectives.
We evaluate our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
arXiv Detail & Related papers (2023-05-23T23:57:44Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation [13.260109561599904]
Unsupervised domain adaptation (UDA) has been a vital protocol for migrating information learned from a labeled source domain to an unlabeled heterogeneous target domain.
We propose "off-the-shelf (OS)" UDA (OSUDA), aimed at image segmentation, by adapting an OS segmentor trained in a source domain to a target domain, in the absence of source domain data in adaptation.
arXiv Detail & Related papers (2022-09-16T13:13:50Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
The cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective in UDA by exploiting the self-supervisions of unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- ACT: Semi-supervised Domain-adaptive Medical Image Segmentation with Asymmetric Co-training [34.017031149886556]
Unsupervised domain adaptation (UDA) has been extensively explored to alleviate domain shifts between source and target domains.
We propose to exploit both labeled source and target domain data, in addition to unlabeled target data in a unified manner.
We present a novel asymmetric co-training (ACT) framework to integrate these subsets and avoid the domination of the source domain data.
arXiv Detail & Related papers (2022-06-05T23:48:00Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over prior state-of-the-arts in standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as a source domain and TED-LIUM 3 as well as SWITCHBOARD show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
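For contrast, the discriminative self-training referred to in the abstract, and used as a building block by several of the related papers listed above, typically draws pseudo-labels from the softmax output and keeps only high-confidence predictions. A minimal, hypothetical sketch of that standard selection step follows; the 0.9 threshold and the function names are illustrative, not taken from any of the papers.

```python
# Hypothetical sketch of confidence-thresholded pseudo-label selection for
# discriminative self-training (classification or segmentation).
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(model, x_target, conf_thresh=0.9):
    """Return argmax pseudo-labels and a mask of predictions confident enough
    to train on (the discrete, softmax-histogram-style selection)."""
    model.eval()
    probs = F.softmax(model(x_target), dim=1)   # (B, C, ...) class probabilities
    conf, pseudo = probs.max(dim=1)             # confidence and argmax label
    return pseudo, conf >= conf_thresh

def masked_ce_loss(logits, pseudo, keep):
    """Cross-entropy computed on the selected pseudo-labels only."""
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * keep.float()).sum() / keep.float().sum().clamp(min=1)
```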