Adaptive Hierarchical Dual Consistency for Semi-Supervised Left Atrium
Segmentation on Cross-Domain Data
- URL: http://arxiv.org/abs/2109.08311v2
- Date: Mon, 20 Sep 2021 06:48:39 GMT
- Title: Adaptive Hierarchical Dual Consistency for Semi-Supervised Left Atrium
Segmentation on Cross-Domain Data
- Authors: Jun Chen, Heye Zhang, Raad Mohiaddin, Tom Wong, David Firmin, Jennifer
Keegan, and Guang Yang
- Abstract summary: Generalising semi-supervised learning to cross-domain data is of high importance to improve model robustness.
The AHDC consists of a Bidirectional Adversarial Inference module (BAI) and a Hierarchical Dual Consistency learning module (HDC).
We demonstrate the performance of our proposed AHDC on four 3D late gadolinium enhancement cardiac MR (LGE-CMR) datasets from different centres and a 3D CT dataset.
- Score: 8.645556125521246
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Semi-supervised learning is of great significance for left atrium (LA)
segmentation when labelled data are insufficient. Generalising semi-supervised
learning to cross-domain data is important for further improving model
robustness. However, the distribution differences and sample mismatch between
data domains hinder this generalisation. In this study, we alleviate these
problems by proposing an Adaptive Hierarchical Dual Consistency (AHDC) framework
for semi-supervised LA segmentation on cross-domain data. The AHDC mainly
consists of a Bidirectional Adversarial Inference module (BAI) and a
Hierarchical Dual Consistency learning module (HDC). The BAI overcomes the
distribution differences and the sample mismatch between the two domains by
adversarially learning two mapping networks that yield two matched domains
through mutual adaptation. The HDC then applies a hierarchical dual learning
paradigm for cross-domain semi-supervised segmentation on the matched domains.
It builds two dual-modelling networks that mine complementary information both
within each domain and across domains. For intra-domain learning, a consistency
constraint is applied to the dual-modelling targets to exploit the complementary
modelling information. For inter-domain learning, a consistency constraint is
applied to the LAs modelled by the two dual-modelling networks to exploit the
complementary knowledge among the data domains. We demonstrate the performance
of the proposed AHDC on four 3D late gadolinium enhancement cardiac MR (LGE-CMR)
datasets from different centres and on a 3D CT dataset. Compared with other
state-of-the-art methods, AHDC achieves higher segmentation accuracy, indicating
its capability for cross-domain semi-supervised LA segmentation.
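To make the hierarchical dual consistency idea concrete, the sketch below shows one plausible way of combining the intra-domain constraint (between a network's dual-modelling targets) and the inter-domain constraint (between the LAs modelled by the two networks on matched inputs) into a single training loss. This is a minimal PyTorch-style sketch, not the authors' implementation: the network names (seg_net_a, seg_net_b), the assumption that each network returns a (primary, auxiliary) prediction pair, and the mean-squared-error form of the consistency terms are illustrative assumptions.

# Minimal sketch (not the authors' code) of the two consistency constraints
# described in the abstract. Names and the MSE form are assumptions.
import torch.nn.functional as F

def hierarchical_dual_consistency_loss(seg_net_a, seg_net_b,
                                       x_matched_a, x_matched_b,
                                       lambda_intra=1.0, lambda_inter=1.0):
    # Assumption: each dual-modelling network returns a pair of predictions
    # for the same LA target (e.g. a primary mask and an auxiliary estimate).
    pred_a, aux_a = seg_net_a(x_matched_a)
    pred_b, aux_b = seg_net_b(x_matched_b)

    # Intra-domain consistency: align each network's dual-modelling targets.
    loss_intra = F.mse_loss(pred_a, aux_a) + F.mse_loss(pred_b, aux_b)

    # Inter-domain consistency: align the LAs modelled by the two networks
    # on matched (mutually adapted) inputs produced by the BAI module.
    loss_inter = F.mse_loss(pred_a, pred_b)

    return lambda_intra * loss_intra + lambda_inter * loss_inter

In a full training loop, such a term would typically be added to the supervised segmentation loss (e.g. Dice or cross-entropy) computed on the labelled samples of each matched domain.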
Related papers
- Improving Intrusion Detection with Domain-Invariant Representation Learning in Latent Space [4.871119861180455]
We introduce a two-phase representation learning technique using multi-task learning.
We disentangle the latent space by minimizing the mutual information between the prior and latent space.
We assess the model's efficacy across multiple cybersecurity datasets.
arXiv Detail & Related papers (2023-12-28T17:24:13Z)
- Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation [18.409194129528004]
We propose a deliberated domain bridging (DDB) approach for dense prediction tasks such as domain adaptive semantic segmentation (DASS).
At the heart of DDB lies a dual-path domain bridging step for generating two intermediate domains using the coarse-wise and the fine-wise data mixing techniques.
Our experiments on adaptive segmentation tasks with different settings demonstrate that our DDB significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-09-16T03:41:09Z)
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Semi-supervised Domain Adaptation for Semantic Segmentation [3.946367634483361]
We propose a novel two-step semi-supervised dual-domain adaptation (SSDDA) approach to address both cross- and intra-domain gaps in semantic segmentation.
We demonstrate that the proposed approach outperforms state-of-the-art methods on two common synthetic-to-real semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-20T16:13:00Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation [69.09432302497116]
We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge beneath the unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target domain data.
arXiv Detail & Related papers (2021-01-07T05:17:38Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution for person Re-Identification (Re-ID) on unseen domains.
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation exploits well-established source domain information to facilitate learning on the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)