Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy
Minimisation for Multi-modal Cardiac Image Segmentation
- URL: http://arxiv.org/abs/2103.08219v1
- Date: Mon, 15 Mar 2021 08:59:44 GMT
- Title: Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy
Minimisation for Multi-modal Cardiac Image Segmentation
- Authors: Sulaiman Vesal, Mingxuan Gu, Ronak Kosti, Andreas Maier, Nishant
Ravikumar
- Abstract summary: We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
- Score: 10.417009344120917
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning models are sensitive to domain shift phenomena. A model trained
on images from one domain cannot generalise well when tested on images from a
different domain, despite capturing similar anatomical structures, mainly
because the data distributions of the two domains differ. Moreover, creating
annotations for every new modality is a tedious and time-consuming task that
also suffers from high inter- and intra-observer variability.
Unsupervised domain adaptation (UDA) methods intend to reduce the gap between
source and target domains by leveraging source domain labelled data to generate
labels for the target domain. However, current state-of-the-art (SOTA) UDA
methods demonstrate degraded performance when there is insufficient data in
source and target domains. In this paper, we present a novel UDA method for
multi-modal cardiac image segmentation. The proposed method is based on
adversarial learning and adapts network features between source and target
domain in different spaces. The paper introduces an end-to-end framework that
integrates: a) entropy minimisation, b) output feature space alignment and c) a
novel point-cloud shape adaptation based on the latent features learned by the
segmentation model. We validated our method on two cardiac datasets by adapting
from the annotated source domain, bSSFP-MRI (balanced Steady-State Free
Precession MRI), to the unannotated target domain, LGE-MRI (Late Gadolinium
Enhancement MRI), for the multi-sequence dataset; and from MRI (source) to CT
(target) for the cross-modality dataset. The results highlighted that, by
enforcing adversarial learning in different parts of the network, the proposed
method delivered promising performance compared to other SOTA methods.
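To make the entropy-minimisation component concrete, here is a minimal sketch of the pixel-wise entropy loss commonly applied to unlabelled target-domain predictions in such frameworks. This is an illustrative NumPy approximation, not the authors' implementation; the function names and array shapes are assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_minimisation_loss(logits, eps=1e-12):
    """Mean pixel-wise Shannon entropy of the predicted class distribution.

    logits: array of shape (H, W, C) for one image. Minimising this value
    pushes target-domain predictions towards confident (low-entropy) outputs.
    """
    p = softmax(logits)
    entropy = -(p * np.log(p + eps)).sum(axis=-1)  # per-pixel entropy, (H, W)
    return entropy.mean()

# Uniform logits give maximal entropy log(C); a strongly peaked
# prediction gives entropy close to zero.
uniform = np.zeros((4, 4, 3))       # equal logits -> uniform softmax
confident = np.zeros((4, 4, 3))
confident[..., 0] = 50.0            # strongly favours class 0
```

Minimising this loss on target images (alongside the adversarial alignment terms) encourages the segmenter to produce decisive, source-like predictions in the target domain.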
Related papers
- Unsupervised Federated Domain Adaptation for Segmentation of MRI Images [20.206972068340843]
We develop a method for unsupervised federated domain adaptation using multiple annotated source domains.
Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for effective use in an unannotated target domain.
arXiv Detail & Related papers (2024-01-02T00:31:41Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce bi-directional transport to align the target features with the class prototypes by minimizing their expected transport cost.
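The prototype-anchored idea in this summary can be sketched briefly: the rows of a pre-trained linear classifier serve as source class prototypes, and target features are matched to them by cosine similarity. This is a hedged NumPy illustration, not the paper's code; `prototype_assignment` is a hypothetical helper name.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale each vector to unit length for cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def prototype_assignment(features, classifier_weights):
    """Assign each target feature vector to its closest class prototype.

    features: (N, D) target-domain feature vectors.
    classifier_weights: (C, D) rows of a pre-trained pixel-wise classifier,
    used directly as source class prototypes.
    Returns cosine similarities (N, C) and hard assignments (N,).
    """
    f = l2_normalize(features)
    w = l2_normalize(classifier_weights)
    sims = f @ w.T
    return sims, sims.argmax(axis=1)

# Toy example: three orthogonal prototypes, two target features.
protos = np.eye(3)
feats = np.array([[0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.2]])
sims, assign = prototype_assignment(feats, protos)
```

The actual method additionally refines these matches with bi-directional transport and contrastive learning; the sketch covers only the anchoring step.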
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- SMC-UDA: Structure-Modal Constraint for Unsupervised Cross-Domain Renal Segmentation [100.86339246424541]
We propose a novel Structure-Modal Constrained (SMC) UDA framework based on a discriminative paradigm and introduce edge structure as a bridge between domains.
With structure-constrained self-learning and a progressive ROI, our method segments the kidney by locating the 3D spatial structure of its edges.
Experiments show that the proposed SMC-UDA generalises strongly and outperforms generative UDA methods.
arXiv Detail & Related papers (2023-06-14T02:57:23Z)
- Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation [13.260109561599904]
Unsupervised domain adaptation (UDA) has been a vital protocol for migrating information learned from a labeled source domain to an unlabeled heterogeneous target domain.
We propose "off-the-shelf (OS)" UDA (OSUDA) for image segmentation, adapting an OS segmentor trained on a source domain to a target domain without access to source domain data during adaptation.
arXiv Detail & Related papers (2022-09-16T13:13:50Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised Domain Adaptation with Variational Approximation for Cardiac Segmentation [15.2292571922932]
Unsupervised domain adaptation is useful in medical image segmentation.
We propose a new framework, where the latent features of both domains are driven towards a common and parameterized variational form.
This is achieved by two networks based on variational auto-encoders (VAEs) and a regularization for this variational approximation.
arXiv Detail & Related papers (2021-06-16T13:00:39Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Privacy Preserving Domain Adaptation for Semantic Segmentation of Medical Images [13.693640425403636]
Unsupervised domain adaptation (UDA) is proposed to adapt a model to new modalities using solely unlabeled target domain data.
We develop an algorithm for UDA in a privacy-constrained setting, where the source domain data is inaccessible.
We demonstrate the effectiveness of our algorithm by comparing it to state-of-the-art medical image semantic segmentation approaches.
arXiv Detail & Related papers (2021-01-02T22:12:42Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.